diff -Nru python-urllib3-1.19.1/CHANGES.rst python-urllib3-1.21.1/CHANGES.rst --- python-urllib3-1.19.1/CHANGES.rst 2016-11-16 10:03:57.000000000 +0000 +++ python-urllib3-1.21.1/CHANGES.rst 2017-05-02 10:56:31.000000000 +0000 @@ -1,11 +1,96 @@ Changes ======= +1.21.1 (2017-05-02) +------------------- + +* Fixed SecureTransport issue that would cause long delays in response body + delivery. (Pull #1154) + +* Fixed regression in 1.21 that threw exceptions when users passed the + ``socket_options`` flag to the ``PoolManager``. (Issue #1165) + +* Fixed regression in 1.21 that threw exceptions when users passed the + ``assert_hostname`` or ``assert_fingerprint`` flag to the ``PoolManager``. + (Pull #1157) + + +1.21 (2017-04-25) +----------------- + +* Improved performance of certain selector system calls on Python 3.5 and + later. (Pull #1095) + +* Resolved issue where the PyOpenSSL backend would not wrap SysCallError + exceptions appropriately when sending data. (Pull #1125) + +* Selectors now detects a monkey-patched select module after import for modules + that patch the select module like eventlet, greenlet. (Pull #1128) + +* Reduced memory consumption when streaming zlib-compressed responses + (as opposed to raw deflate streams). (Pull #1129) + +* Connection pools now use the entire request context when constructing the + pool key. (Pull #1016) + +* ``PoolManager.connection_from_*`` methods now accept a new keyword argument, + ``pool_kwargs``, which are merged with the existing ``connection_pool_kw``. + (Pull #1016) + +* Add retry counter for ``status_forcelist``. (Issue #1147) + +* Added ``contrib`` module for using SecureTransport on macOS: + ``urllib3.contrib.securetransport``. (Pull #1122) + +* urllib3 now only normalizes the case of ``http://`` and ``https://`` schemes: + for schemes it does not recognise, it assumes they are case-sensitive and + leaves them unchanged. + (Issue #1080) + + +1.20 (2017-01-19) +----------------- + +* Added support for waiting for I/O using selectors other than select, + improving urllib3's behaviour with large numbers of concurrent connections. + (Pull #1001) + +* Updated the date for the system clock check. (Issue #1005) + +* ConnectionPools now correctly consider hostnames to be case-insensitive. + (Issue #1032) + +* Outdated versions of PyOpenSSL now cause the PyOpenSSL contrib module + to fail when it is injected, rather than at first use. (Pull #1063) + +* Outdated versions of cryptography now cause the PyOpenSSL contrib module + to fail when it is injected, rather than at first use. (Issue #1044) + +* Automatically attempt to rewind a file-like body object when a request is + retried or redirected. (Pull #1039) + +* Fix some bugs that occur when modules incautiously patch the queue module. + (Pull #1061) + +* Prevent retries from occuring on read timeouts for which the request method + was not in the method whitelist. (Issue #1059) + +* Changed the PyOpenSSL contrib module to lazily load idna to avoid + unnecessarily bloating the memory of programs that don't need it. (Pull + #1076) + +* Add support for IPv6 literals with zone identifiers. (Pull #1013) + +* Added support for socks5h:// and socks4a:// schemes when working with SOCKS + proxies, and controlled remote DNS appropriately. (Issue #1035) + + 1.19.1 (2016-11-16) ------------------- * Fixed AppEngine import that didn't function on Python 3.5. 
(Pull #1025) + 1.19 (2016-11-03) ----------------- diff -Nru python-urllib3-1.19.1/CONTRIBUTORS.txt python-urllib3-1.21.1/CONTRIBUTORS.txt --- python-urllib3-1.19.1/CONTRIBUTORS.txt 2016-11-16 09:43:02.000000000 +0000 +++ python-urllib3-1.21.1/CONTRIBUTORS.txt 2017-04-25 11:28:36.000000000 +0000 @@ -212,5 +212,23 @@ * Added length_remaining to determine remaining data to be read. * Added enforce_content_length to raise exception when incorrect content-length received. +* Seth Michael Larson + * Created selectors backport that supports PEP 475. + +* Alexandre Dias + * Don't retry on timeout if method not in whitelist + +* Moinuddin Quadri + * Lazily load idna package + +* Tom White + * Made SOCKS handler differentiate socks5h from socks5 and socks4a from socks4. + +* Tim Burke + * Stop buffering entire deflate-encoded responses. + +* Tuukka Mustonen + * Add counter for status_forcelist retries. + * [Your name or handle] <[email or website]> * [Brief summary of your changes] diff -Nru python-urllib3-1.19.1/debian/changelog python-urllib3-1.21.1/debian/changelog --- python-urllib3-1.19.1/debian/changelog 2017-05-02 09:24:57.000000000 +0000 +++ python-urllib3-1.21.1/debian/changelog 2017-10-23 08:21:57.000000000 +0000 @@ -1,8 +1,24 @@ -python-urllib3 (1.19.1-1+certbot~xenial+1) xenial; urgency=medium +python-urllib3 (1.21.1-1+ubuntu16.04.1+certbot+1) xenial; urgency=medium - * Backport for Xenial Xerus (16.04 LTS) + * No-change backport to xenial - -- Ondřej Surý Tue, 02 May 2017 11:24:57 +0200 + -- Ondřej Surý Mon, 23 Oct 2017 08:21:57 +0000 + +python-urllib3 (1.21.1-1) unstable; urgency=medium + + * New upstream release. (Closes: #861642) + * debian/control + - Add python-psutil{,3} to Build-Depends. + - Add version constraint for six. (Closes: #857006) + - Bump Standards-Version to 4.0.0 (no changes needed). + * debian/copyright + - Update copyright years. + * debian/patches/01_do-not-use-embedded-python-six.patch + - Refresh. + * debian/tests/control + - Add autodep8 tests. 
(Closes: #796717) + + -- Daniele Tricoli Fri, 14 Jul 2017 01:21:44 +0200 python-urllib3 (1.19.1-1) unstable; urgency=medium diff -Nru python-urllib3-1.19.1/debian/control python-urllib3-1.21.1/debian/control --- python-urllib3-1.19.1/debian/control 2016-12-12 22:04:02.000000000 +0000 +++ python-urllib3-1.21.1/debian/control 2017-07-13 23:21:44.000000000 +0000 @@ -10,17 +10,19 @@ python-coverage (>= 3.6), python-mock, python-nose (>=1.3.3), + python-psutil, python-setuptools, - python-six, + python-six (>= 1.10.0), python-tornado, python3-all, python3-coverage (>= 3.6), python3-mock, python3-nose (>=1.3.3), + python3-psutil, python3-setuptools, - python3-six, + python3-six (>= 1.10.0), python3-tornado -Standards-Version: 3.9.8 +Standards-Version: 4.0.0 X-Python-Version: >= 2.6 X-Python3-Version: >= 3.2 Homepage: http://urllib3.readthedocs.org diff -Nru python-urllib3-1.19.1/debian/copyright python-urllib3-1.21.1/debian/copyright --- python-urllib3-1.19.1/debian/copyright 2016-12-12 22:04:02.000000000 +0000 +++ python-urllib3-1.21.1/debian/copyright 2017-07-13 23:21:44.000000000 +0000 @@ -20,7 +20,7 @@ License: PSF-2 Files: debian/* -Copyright: 2012-2016, Daniele Tricoli +Copyright: 2012-2017, Daniele Tricoli License: Expat License: Expat diff -Nru python-urllib3-1.19.1/debian/.git-dpm python-urllib3-1.21.1/debian/.git-dpm --- python-urllib3-1.19.1/debian/.git-dpm 2016-12-12 22:04:02.000000000 +0000 +++ python-urllib3-1.21.1/debian/.git-dpm 2017-07-13 23:21:44.000000000 +0000 @@ -1,11 +1,11 @@ # see git-dpm(1) from git-dpm package -37999f5181c1aec2ea41370cb82e5f559a7ba59c -37999f5181c1aec2ea41370cb82e5f559a7ba59c -0dbe96444faad3b0e3c39c482d8a7244cc7f9a8d -0dbe96444faad3b0e3c39c482d8a7244cc7f9a8d -python-urllib3_1.19.1.orig.tar.gz -aaa708899894cfc6e59e8e1abe8768dc9449b0a4 -187416 +ddf0645459fd4966860380c777cc6a79700a881d +ddf0645459fd4966860380c777cc6a79700a881d +6dde9b057102818141f6e8dc08be3f479d082724 +6dde9b057102818141f6e8dc08be3f479d082724 +python-urllib3_1.21.1.orig.tar.gz +f0112a21bede6876e502071a531f630d01be4ed5 +224266 debianTag="debian/%e%v" patchedTag="patched/%e%v" upstreamTag="upstream/%e%u" diff -Nru python-urllib3-1.19.1/debian/patches/01_do-not-use-embedded-python-six.patch python-urllib3-1.21.1/debian/patches/01_do-not-use-embedded-python-six.patch --- python-urllib3-1.19.1/debian/patches/01_do-not-use-embedded-python-six.patch 2016-12-12 22:04:02.000000000 +0000 +++ python-urllib3-1.21.1/debian/patches/01_do-not-use-embedded-python-six.patch 2017-07-13 23:21:44.000000000 +0000 @@ -1,4 +1,4 @@ -From 562c54c70ab3c0ad9e49c4bf5460506d9dab00b3 Mon Sep 17 00:00:00 2001 +From c98d95645c5a2f028d74c26c088a835bbe592ddc Mon Sep 17 00:00:00 2001 From: Daniele Tricoli Date: Thu, 8 Oct 2015 13:19:46 -0700 Subject: Do not use embedded copy of python-six. 
@@ -6,14 +6,16 @@ Forwarded: not-needed Patch-Name: 01_do-not-use-embedded-python-six.patch + +fix six --- dummyserver/handlers.py | 6 +++--- test/__init__.py | 2 +- - test/contrib/test_pyopenssl.py | 2 +- test/test_collections.py | 2 +- test/test_connectionpool.py | 4 ++-- test/test_fields.py | 2 +- test/test_filepost.py | 2 +- + test/test_queue_monkeypatch.py | 2 +- test/test_response.py | 2 +- test/test_retry.py | 2 +- test/test_util.py | 2 +- @@ -37,7 +39,7 @@ 27 files changed, 37 insertions(+), 37 deletions(-) diff --git a/dummyserver/handlers.py b/dummyserver/handlers.py -index bc6ad94..f51b371 100644 +index 44f5cda..bacf31a 100644 --- a/dummyserver/handlers.py +++ b/dummyserver/handlers.py @@ -15,8 +15,8 @@ from tornado import httputil @@ -51,7 +53,7 @@ log = logging.getLogger(__name__) -@@ -304,7 +304,7 @@ def _parse_header(line): +@@ -308,7 +308,7 @@ def _parse_header(line): """ import tornado.httputil import email.utils @@ -61,46 +63,33 @@ line = line.encode('utf-8') parts = tornado.httputil._parseparam(';' + line) diff --git a/test/__init__.py b/test/__init__.py -index bab39ed..076cdf0 100644 +index 1983040..72ed38c 100644 --- a/test/__init__.py +++ b/test/__init__.py -@@ -8,7 +8,7 @@ import socket +@@ -9,7 +9,7 @@ import platform from nose.plugins.skip import SkipTest - from urllib3.exceptions import MaxRetryError, HTTPWarning + from urllib3.exceptions import HTTPWarning -from urllib3.packages import six +import six + from urllib3.util import ssl_ # We need a host that will not immediately close the connection with a TCP - # Reset. SO suggests this hostname -diff --git a/test/contrib/test_pyopenssl.py b/test/contrib/test_pyopenssl.py -index e88edde..376b445 100644 ---- a/test/contrib/test_pyopenssl.py -+++ b/test/contrib/test_pyopenssl.py -@@ -2,7 +2,7 @@ - import unittest - - from nose.plugins.skip import SkipTest --from urllib3.packages import six -+import six - - try: - from urllib3.contrib.pyopenssl import (inject_into_urllib3, diff --git a/test/test_collections.py b/test/test_collections.py -index 9d72939..78ef634 100644 +index ce9c8ae..0078c73 100644 --- a/test/test_collections.py +++ b/test/test_collections.py -@@ -4,7 +4,7 @@ from urllib3._collections import ( - HTTPHeaderDict, - RecentlyUsedContainer as Container +@@ -6,7 +6,7 @@ from urllib3._collections import ( ) + from nose.plugins.skip import SkipTest + -from urllib3.packages import six +import six xrange = six.moves.xrange - from nose.plugins.skip import SkipTest + diff --git a/test/test_connectionpool.py b/test/test_connectionpool.py -index 2fab0c6..4d8f7de 100644 +index 8836e9a..df84eb5 100644 --- a/test/test_connectionpool.py +++ b/test/test_connectionpool.py @@ -10,8 +10,8 @@ from urllib3.connectionpool import ( @@ -115,20 +104,20 @@ from urllib3.exceptions import ( ClosedPoolError, diff --git a/test/test_fields.py b/test/test_fields.py -index 27dad92..61b9b9c 100644 +index f531d9e..6ad6437 100644 --- a/test/test_fields.py +++ b/test/test_fields.py @@ -1,7 +1,7 @@ import unittest from urllib3.fields import guess_content_type, RequestField --from urllib3.packages.six import u, PY3 -+from six import u, PY3 +-from urllib3.packages.six import u ++from six import u from . import onlyPy2 diff --git a/test/test_filepost.py b/test/test_filepost.py -index 390dbb3..ecc6710 100644 +index f744a96..4d3eaf3 100644 --- a/test/test_filepost.py +++ b/test/test_filepost.py @@ -2,7 +2,7 @@ import unittest @@ -140,8 +129,21 @@ BOUNDARY = '!! test boundary !!' 
+diff --git a/test/test_queue_monkeypatch.py b/test/test_queue_monkeypatch.py +index 2f50b90..867d951 100644 +--- a/test/test_queue_monkeypatch.py ++++ b/test/test_queue_monkeypatch.py +@@ -5,7 +5,7 @@ import sys + + import urllib3 + from urllib3.exceptions import EmptyPoolError +-from urllib3.packages.six.moves import queue ++from six.moves import queue + + if sys.version_info >= (2, 7): + import unittest diff --git a/test/test_response.py b/test/test_response.py -index 10aa410..e3ea351 100644 +index 5146b1f..ed5f45c 100644 --- a/test/test_response.py +++ b/test/test_response.py @@ -7,7 +7,7 @@ from urllib3.response import HTTPResponse @@ -154,7 +156,7 @@ from urllib3.util.response import is_fp_closed diff --git a/test/test_retry.py b/test/test_retry.py -index 550dbfc..c57b48e 100644 +index dbe4dc0..9f631b6 100644 --- a/test/test_retry.py +++ b/test/test_retry.py @@ -1,7 +1,7 @@ @@ -167,10 +169,10 @@ from urllib3.exceptions import ( ConnectTimeoutError, diff --git a/test/test_util.py b/test/test_util.py -index 23a9fe2..e0b1f1d 100644 +index 812aae7..0d62c41 100644 --- a/test/test_util.py +++ b/test/test_util.py -@@ -37,7 +37,7 @@ from urllib3.util.connection import ( +@@ -38,7 +38,7 @@ from urllib3.util.connection import ( _has_ipv6 ) from urllib3.util import is_fp_closed, ssl_ @@ -180,7 +182,7 @@ from . import clear_warnings diff --git a/test/with_dummyserver/test_chunked_transfer.py b/test/with_dummyserver/test_chunked_transfer.py -index 04f4f8b..866689c 100644 +index ba5251b..2de4a04 100644 --- a/test/with_dummyserver/test_chunked_transfer.py +++ b/test/with_dummyserver/test_chunked_transfer.py @@ -1,7 +1,7 @@ @@ -193,12 +195,12 @@ diff --git a/test/with_dummyserver/test_connectionpool.py b/test/with_dummyserver/test_connectionpool.py -index e98258c..cd21496 100644 +index 6da64ee..8990c99 100644 --- a/test/with_dummyserver/test_connectionpool.py +++ b/test/with_dummyserver/test_connectionpool.py -@@ -29,8 +29,8 @@ from urllib3.exceptions import ( - ProtocolError, +@@ -25,8 +25,8 @@ from urllib3.exceptions import ( NewConnectionError, + UnrewindableBodyError, ) -from urllib3.packages.six import b, u -from urllib3.packages.six.moves.urllib.parse import urlencode @@ -208,7 +210,7 @@ from urllib3.util.timeout import Timeout diff --git a/test/with_dummyserver/test_https.py b/test/with_dummyserver/test_https.py -index dc9b1f7..34473ab 100644 +index 9e3de75..08bc4f6 100644 --- a/test/with_dummyserver/test_https.py +++ b/test/with_dummyserver/test_https.py @@ -37,7 +37,7 @@ from urllib3.exceptions import ( @@ -221,7 +223,7 @@ import urllib3.util as util diff --git a/urllib3/_collections.py b/urllib3/_collections.py -index 77cee01..114116c 100644 +index 4849dde..8d0df8f 100644 --- a/urllib3/_collections.py +++ b/urllib3/_collections.py @@ -15,7 +15,7 @@ try: # Python 2.7+ @@ -234,7 +236,7 @@ __all__ = ['RecentlyUsedContainer', 'HTTPHeaderDict'] diff --git a/urllib3/connection.py b/urllib3/connection.py -index e24f5e3..568eb88 100644 +index c0d8329..cf72daf 100644 --- a/urllib3/connection.py +++ b/urllib3/connection.py @@ -6,9 +6,9 @@ import sys @@ -251,7 +253,7 @@ try: # Compiled with SSL? 
import ssl diff --git a/urllib3/connectionpool.py b/urllib3/connectionpool.py -index 19d08f2..ad8f35a 100644 +index b4f1166..d32419f 100644 --- a/urllib3/connectionpool.py +++ b/urllib3/connectionpool.py @@ -24,8 +24,8 @@ from .exceptions import ( @@ -259,14 +261,14 @@ ) from .packages.ssl_match_hostname import CertificateError -from .packages import six --from .packages.six.moves.queue import LifoQueue, Empty, Full +-from .packages.six.moves import queue +import six -+from six.moves.queue import LifoQueue, Empty, Full ++from six.moves import queue from .connection import ( port_by_scheme, DummyConnection, diff --git a/urllib3/contrib/appengine.py b/urllib3/contrib/appengine.py -index c3249ee..fea7cb2 100644 +index 814b022..825adfa 100644 --- a/urllib3/contrib/appengine.py +++ b/urllib3/contrib/appengine.py @@ -42,7 +42,7 @@ from __future__ import absolute_import @@ -301,7 +303,7 @@ log = getLogger(__name__) diff --git a/urllib3/exceptions.py b/urllib3/exceptions.py -index 8a091c1..58ba5d0 100644 +index 6c4be58..515113c 100644 --- a/urllib3/exceptions.py +++ b/urllib3/exceptions.py @@ -1,5 +1,5 @@ @@ -340,7 +342,7 @@ writer = codecs.lookup('utf-8')[3] diff --git a/urllib3/poolmanager.py b/urllib3/poolmanager.py -index 276b54d..fde0cd4 100644 +index 4ae9174..572815c 100644 --- a/urllib3/poolmanager.py +++ b/urllib3/poolmanager.py @@ -7,7 +7,7 @@ from ._collections import RecentlyUsedContainer @@ -366,7 +368,7 @@ __all__ = ['RequestMethods'] diff --git a/urllib3/response.py b/urllib3/response.py -index 6f1b63c..38640dc 100644 +index 408d999..59c6e63 100644 --- a/urllib3/response.py +++ b/urllib3/response.py @@ -11,8 +11,8 @@ from .exceptions import ( @@ -381,18 +383,18 @@ from .util.response import is_fp_closed, is_response_to_head diff --git a/urllib3/util/request.py b/urllib3/util/request.py -index 7377931..40bf0b4 100644 +index 3ddfcd5..da1249e 100644 --- a/urllib3/util/request.py +++ b/urllib3/util/request.py @@ -1,7 +1,7 @@ from __future__ import absolute_import from base64 import b64encode --from ..packages.six import b -+from six import b +-from ..packages.six import b, integer_types ++from six import b, integer_types + from ..exceptions import UnrewindableBodyError ACCEPT_ENCODING = 'gzip,deflate' - diff --git a/urllib3/util/response.py b/urllib3/util/response.py index 67cf730..9be555f 100644 --- a/urllib3/util/response.py @@ -405,7 +407,7 @@ from ..exceptions import HeaderParsingError diff --git a/urllib3/util/retry.py b/urllib3/util/retry.py -index 47ad539..19f0380 100644 +index c603cb4..e8a04a1 100644 --- a/urllib3/util/retry.py +++ b/urllib3/util/retry.py @@ -14,7 +14,7 @@ from ..exceptions import ( diff -Nru python-urllib3-1.19.1/debian/patches/02_require-cert-verification.patch python-urllib3-1.21.1/debian/patches/02_require-cert-verification.patch --- python-urllib3-1.19.1/debian/patches/02_require-cert-verification.patch 2016-12-12 22:04:02.000000000 +0000 +++ python-urllib3-1.21.1/debian/patches/02_require-cert-verification.patch 2017-07-13 23:21:44.000000000 +0000 @@ -1,4 +1,4 @@ -From dcd89660d546c3d46c7778278c0143e0602b4a20 Mon Sep 17 00:00:00 2001 +From c717d5c726d5f3033a698e643269850a94f11f8a Mon Sep 17 00:00:00 2001 From: Jamie Strandboge Date: Thu, 8 Oct 2015 13:19:47 -0700 Subject: require SSL certificate validation by default by using @@ -14,10 +14,10 @@ 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/urllib3/connectionpool.py b/urllib3/connectionpool.py -index ad8f35a..a3a328e 100644 +index d32419f..9a7ab95 100644 --- a/urllib3/connectionpool.py +++ 
b/urllib3/connectionpool.py -@@ -735,6 +735,8 @@ class HTTPSConnectionPool(HTTPConnectionPool): +@@ -744,6 +744,8 @@ class HTTPSConnectionPool(HTTPConnectionPool): ``ca_cert_dir``, and ``ssl_version`` are only used if :mod:`ssl` is available and are fed into :meth:`urllib3.util.ssl_wrap_socket` to upgrade the connection socket into an SSL socket. @@ -26,7 +26,7 @@ """ scheme = 'https' -@@ -744,8 +746,8 @@ class HTTPSConnectionPool(HTTPConnectionPool): +@@ -753,8 +755,8 @@ class HTTPSConnectionPool(HTTPConnectionPool): strict=False, timeout=Timeout.DEFAULT_TIMEOUT, maxsize=1, block=False, headers=None, retries=None, _proxy=None, _proxy_headers=None, diff -Nru python-urllib3-1.19.1/debian/patches/04_relax_nosetests_options.patch python-urllib3-1.21.1/debian/patches/04_relax_nosetests_options.patch --- python-urllib3-1.19.1/debian/patches/04_relax_nosetests_options.patch 2016-12-12 22:04:02.000000000 +0000 +++ python-urllib3-1.21.1/debian/patches/04_relax_nosetests_options.patch 2017-07-13 23:21:44.000000000 +0000 @@ -1,4 +1,4 @@ -From 8e646fc8a8917fd2b829894d3097c59e735650e5 Mon Sep 17 00:00:00 2001 +From b132bdaaba47f8be0a85870362aee9d7bc9ffb08 Mon Sep 17 00:00:00 2001 From: Daniele Tricoli Date: Thu, 8 Oct 2015 13:19:50 -0700 Subject: Do not use logging-clear-handlers to see all logging output and @@ -14,7 +14,7 @@ 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/setup.cfg b/setup.cfg -index fbaace4..0cdcef9 100644 +index e585a59..4bd8576 100644 --- a/setup.cfg +++ b/setup.cfg @@ -1,5 +1,5 @@ diff -Nru python-urllib3-1.19.1/debian/patches/05_avoid-embedded-ssl-match-hostname.patch python-urllib3-1.21.1/debian/patches/05_avoid-embedded-ssl-match-hostname.patch --- python-urllib3-1.19.1/debian/patches/05_avoid-embedded-ssl-match-hostname.patch 2016-12-12 22:04:02.000000000 +0000 +++ python-urllib3-1.21.1/debian/patches/05_avoid-embedded-ssl-match-hostname.patch 2017-07-13 23:21:44.000000000 +0000 @@ -1,4 +1,4 @@ -From 37999f5181c1aec2ea41370cb82e5f559a7ba59c Mon Sep 17 00:00:00 2001 +From ddf0645459fd4966860380c777cc6a79700a881d Mon Sep 17 00:00:00 2001 From: Stefano Rivera Date: Thu, 8 Oct 2015 13:19:51 -0700 Subject: Do not use embedded copy of ssl.match_hostname, when possible diff -Nru python-urllib3-1.19.1/debian/tests/control python-urllib3-1.21.1/debian/tests/control --- python-urllib3-1.19.1/debian/tests/control 1970-01-01 00:00:00.000000000 +0000 +++ python-urllib3-1.21.1/debian/tests/control 2017-07-13 23:21:44.000000000 +0000 @@ -0,0 +1,6 @@ +Test-Command: set -e ; for py in $(pyversions -r 2>/dev/null) ; do cd "$ADTTMP" ; echo "Testing with $py:" ; $py -c "import urllib3; print urllib3" ; done +Depends: python-all, python-urllib3 + +Test-Command: set -e ; for py in $(py3versions -r 2>/dev/null) ; do cd "$ADTTMP" ; echo "Testing with $py:" ; $py -c "import urllib3; print(urllib3)" ; done +Depends: python3-all, python3-urllib3 + diff -Nru python-urllib3-1.19.1/dev-requirements.txt python-urllib3-1.21.1/dev-requirements.txt --- python-urllib3-1.19.1/dev-requirements.txt 2016-11-16 09:43:02.000000000 +0000 +++ python-urllib3-1.21.1/dev-requirements.txt 2017-05-02 09:08:45.000000000 +0000 @@ -8,3 +8,4 @@ tornado==4.2.1 PySocks==1.5.6 pkginfo>=1.0,!=1.3.0 +psutil==4.3.1 diff -Nru python-urllib3-1.19.1/docs/advanced-usage.rst python-urllib3-1.21.1/docs/advanced-usage.rst --- python-urllib3-1.19.1/docs/advanced-usage.rst 2016-10-12 16:41:52.000000000 +0000 +++ python-urllib3-1.21.1/docs/advanced-usage.rst 2017-04-25 11:10:19.000000000 +0000 @@ -184,12 +184,6 @@ `This article `_ 
has more in-depth analysis and explanation. -If you have `homebrew `_, you can configure homebrew Python to -use homebrew's OpenSSL instead of the system OpenSSL:: - - brew install openssl - brew install python --with-brewed-openssl - .. _ssl_warnings: SSL Warnings diff -Nru python-urllib3-1.19.1/dummyserver/certs/cacert.pem python-urllib3-1.21.1/dummyserver/certs/cacert.pem --- python-urllib3-1.19.1/dummyserver/certs/cacert.pem 2015-02-07 19:27:00.000000000 +0000 +++ python-urllib3-1.21.1/dummyserver/certs/cacert.pem 2017-04-25 11:28:36.000000000 +0000 @@ -1,23 +1,23 @@ -----BEGIN CERTIFICATE----- MIIDzDCCAzWgAwIBAgIJALPrscov4b/jMA0GCSqGSIb3DQEBBQUAMIGBMQswCQYD -VQQGEwJGSTEOMAwGA1UECBMFZHVtbXkxDjAMBgNVBAcTBWR1bW15MQ4wDAYDVQQK -EwVkdW1teTEOMAwGA1UECxMFZHVtbXkxETAPBgNVBAMTCFNuYWtlT2lsMR8wHQYJ +VQQGEwJGSTEOMAwGA1UECAwFZHVtbXkxDjAMBgNVBAcMBWR1bW15MQ4wDAYDVQQK +DAVkdW1teTEOMAwGA1UECwwFZHVtbXkxETAPBgNVBAMMCFNuYWtlT2lsMR8wHQYJ KoZIhvcNAQkBFhBkdW1teUB0ZXN0LmxvY2FsMB4XDTExMTIyMjA3NTYxNVoXDTIx -MTIxOTA3NTYxNVowgYExCzAJBgNVBAYTAkZJMQ4wDAYDVQQIEwVkdW1teTEOMAwG -A1UEBxMFZHVtbXkxDjAMBgNVBAoTBWR1bW15MQ4wDAYDVQQLEwVkdW1teTERMA8G -A1UEAxMIU25ha2VPaWwxHzAdBgkqhkiG9w0BCQEWEGR1bW15QHRlc3QubG9jYWww +MTIxOTA3NTYxNVowgYExCzAJBgNVBAYTAkZJMQ4wDAYDVQQIDAVkdW1teTEOMAwG +A1UEBwwFZHVtbXkxDjAMBgNVBAoMBWR1bW15MQ4wDAYDVQQLDAVkdW1teTERMA8G +A1UEAwwIU25ha2VPaWwxHzAdBgkqhkiG9w0BCQEWEGR1bW15QHRlc3QubG9jYWww gZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMrPxr1fZJ82az1N9/I1oU78rjZ8 CNQjV0AzUbxNWiPRrzVrLtbPhHtXXN+NcVP9ahFbThjrF6TRt9/Q62xb4CuKihTL v6k9ietyGkBbSnuE+MfUMgFVpvTUIkyFDbh6v3ZDV0XhYG/jIqoRpXUhjPVy+q8I ImABuxafUjwKdrWXAgMBAAGjggFIMIIBRDAdBgNVHQ4EFgQUGXd/I2JiQllF+3Wd x3NyBLszCi0wgbYGA1UdIwSBrjCBq4AUGXd/I2JiQllF+3Wdx3NyBLszCi2hgYek -gYQwgYExCzAJBgNVBAYTAkZJMQ4wDAYDVQQIEwVkdW1teTEOMAwGA1UEBxMFZHVt -bXkxDjAMBgNVBAoTBWR1bW15MQ4wDAYDVQQLEwVkdW1teTERMA8GA1UEAxMIU25h +gYQwgYExCzAJBgNVBAYTAkZJMQ4wDAYDVQQIDAVkdW1teTEOMAwGA1UEBwwFZHVt +bXkxDjAMBgNVBAoMBWR1bW15MQ4wDAYDVQQLDAVkdW1teTERMA8GA1UEAwwIU25h a2VPaWwxHzAdBgkqhkiG9w0BCQEWEGR1bW15QHRlc3QubG9jYWyCCQCz67HKL+G/ 4zAPBgNVHRMBAf8EBTADAQH/MBEGCWCGSAGG+EIBAQQEAwIBBjAJBgNVHRIEAjAA MCsGCWCGSAGG+EIBDQQeFhxUaW55Q0EgR2VuZXJhdGVkIENlcnRpZmljYXRlMA4G -A1UdDwEB/wQEAwICBDANBgkqhkiG9w0BAQUFAAOBgQBnnwtO8onsyhGOvS6cS8af -IRZyAXgouuPeP3Zrf5W80iZcV23u94969sPEIsD8Ujv5u0hUSrToGl4ahOMEOFNL -R5ndQOkh3VsepJnoE+RklZzbHWxU8onWlVzsNBFbclxidzaU3UHmdgXJAJL5nVSd -Zpn44QSS0UXsaC0mBimVNw== +A1UdDwEB/wQEAwICBDANBgkqhkiG9w0BAQUFAAOBgQBvz3AlIM1x7CMmwkmhLV6+ +PJkMnPW7XbP+cDYUlddCk7XhIDY4486JxqZegMTWgbUt0AgXYfHLFsTqUJXrnLj2 +WqLb3KP2D1HvnvxJjdJV3M6+TP7tGiY4ICi0zff96FG5C2w9Avsozhr3xDFtjKBv +gyA6UdP3oZGN93oOFiMJXg== -----END CERTIFICATE----- diff -Nru python-urllib3-1.19.1/dummyserver/certs/server.combined.pem python-urllib3-1.21.1/dummyserver/certs/server.combined.pem --- python-urllib3-1.19.1/dummyserver/certs/server.combined.pem 1970-01-01 00:00:00.000000000 +0000 +++ python-urllib3-1.21.1/dummyserver/certs/server.combined.pem 2017-04-25 11:28:36.000000000 +0000 @@ -0,0 +1,36 @@ +-----BEGIN CERTIFICATE----- +MIIDczCCAtygAwIBAgIBATANBgkqhkiG9w0BAQUFADCBgTELMAkGA1UEBhMCRkkx +DjAMBgNVBAgMBWR1bW15MQ4wDAYDVQQHDAVkdW1teTEOMAwGA1UECgwFZHVtbXkx +DjAMBgNVBAsMBWR1bW15MREwDwYDVQQDDAhTbmFrZU9pbDEfMB0GCSqGSIb3DQEJ +ARYQZHVtbXlAdGVzdC5sb2NhbDAeFw0xMTEyMjIwNzU4NDBaFw0yMTEyMTgwNzU4 +NDBaMGExCzAJBgNVBAYTAkZJMQ4wDAYDVQQIDAVkdW1teTEOMAwGA1UEBwwFZHVt +bXkxDjAMBgNVBAoMBWR1bW15MQ4wDAYDVQQLDAVkdW1teTESMBAGA1UEAwwJbG9j +YWxob3N0MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDXe3FqmCWvP8XPxqtT 
++0bfL1Tvzvebi46k0WIcUV8bP3vyYiSRXG9ALmyzZH4GHY9UVs4OEDkCMDOBSezB +0y9ai/9doTNcaictdEBu8nfdXKoTtzrn+VX4UPrkH5hm7NQ1fTQuj1MR7yBCmYqN +3Q2Q+Efuujyx0FwBzAuy1aKYuwIDAQABo4IBGDCCARQwCQYDVR0TBAIwADAdBgNV +HQ4EFgQUG+dK5Uos08QUwAWofDb3a8YcYlIwgbYGA1UdIwSBrjCBq4AUGXd/I2Ji +QllF+3Wdx3NyBLszCi2hgYekgYQwgYExCzAJBgNVBAYTAkZJMQ4wDAYDVQQIDAVk +dW1teTEOMAwGA1UEBwwFZHVtbXkxDjAMBgNVBAoMBWR1bW15MQ4wDAYDVQQLDAVk +dW1teTERMA8GA1UEAwwIU25ha2VPaWwxHzAdBgkqhkiG9w0BCQEWEGR1bW15QHRl +c3QubG9jYWyCCQCz67HKL+G/4zAJBgNVHRIEAjAAMCQGA1UdEQQdMBuBDnJvb3RA +bG9jYWxob3N0gglsb2NhbGhvc3QwDQYJKoZIhvcNAQEFBQADgYEAgcW6X1ZUyufm +TFEqEAdpKXdL0rxDwcsM/qqqsXbkz17otH6ujPhBEagzdKtgeNKfy0aXz6rWZugk +lF0IqyC4mcI+vvfgGR5Iy4KdXMrIX98MbrvGJBfbdKhGW2b84wDV42DIDiD2ZGGe +6YZQQIo9LxjuOTf9jsvf+PIkbI4H0To= +-----END CERTIFICATE----- +-----BEGIN RSA PRIVATE KEY----- +MIICXgIBAAKBgQDXe3FqmCWvP8XPxqtT+0bfL1Tvzvebi46k0WIcUV8bP3vyYiSR +XG9ALmyzZH4GHY9UVs4OEDkCMDOBSezB0y9ai/9doTNcaictdEBu8nfdXKoTtzrn ++VX4UPrkH5hm7NQ1fTQuj1MR7yBCmYqN3Q2Q+Efuujyx0FwBzAuy1aKYuwIDAQAB +AoGBANOGBM6bbhq7ImYU4qf8+RQrdVg2tc9Fzo+yTnn30sF/rx8/AiCDOV4qdGAh +HKjKKaGj2H/rotqoEFcxBy05LrgJXxydBP72e9PYhNgKOcSmCQu4yALIPEXfKuIM +zgAErHVJ2l79fif3D4hzNyz+u5E1A9n3FG9cgaJSiYP8IG2RAkEA82GZ8rBkSGQQ +ZQ3oFuzPAAL21lbj8D0p76fsCpvS7427DtZDOjhOIKZmaeykpv+qSzRraqEqjDRi +S4kjQvwh6QJBAOKniZ+NDo2lSpbOFk+XlmABK1DormVpj8KebHEZYok1lRI+WiX9 +Nnoe9YLgix7++6H5SBBCcTB4HvM+5A4BuwMCQQChcX/eZbXP81iQwB3Rfzp8xnqY +icDf7qKvz9Ma4myU7Y5E9EpaB1mD/P14jDpYcMW050vNyqTfpiwB8TFL0NZpAkEA +02jkFH9UyMgZV6qo4tqI98l/ZrtyF8OrxSNSEPhVkZf6EQc5vN9/lc8Uv1vESEgb +3AwRrKDcxRH2BHtv6qSwkwJAGjqnkIcEkA75r1e55/EF2chcZW1+tpwKupE8CtAH +VXGd5DVwt4cYWkLUj2gF2fJbV97uu2MAg5CFDb+vQ6p5eA== +-----END RSA PRIVATE KEY----- Binary files /tmp/tmpfJsv__/ycvQZYTnmh/python-urllib3-1.19.1/dummyserver/.DS_Store and /tmp/tmpfJsv__/fxZCJuCMhU/python-urllib3-1.21.1/dummyserver/.DS_Store differ diff -Nru python-urllib3-1.19.1/dummyserver/handlers.py python-urllib3-1.21.1/dummyserver/handlers.py --- python-urllib3-1.19.1/dummyserver/handlers.py 2016-11-16 10:03:57.000000000 +0000 +++ python-urllib3-1.21.1/dummyserver/handlers.py 2017-01-19 09:51:46.000000000 +0000 @@ -158,8 +158,12 @@ def redirect(self, request): "Perform a redirect to ``target``" target = request.params.get('target', '/') + status = request.params.get('status', '303 See Other') + if len(status) == 3: + status = '%s Redirect' % status.decode('latin-1') + headers = [('Location', target)] - return Response(status='303 See Other', headers=headers) + return Response(status=status, headers=headers) def multi_redirect(self, request): "Performs a redirect chain based on ``redirect_codes``" diff -Nru python-urllib3-1.19.1/dummyserver/server.py python-urllib3-1.21.1/dummyserver/server.py --- python-urllib3-1.19.1/dummyserver/server.py 2016-11-16 10:03:57.000000000 +0000 +++ python-urllib3-1.21.1/dummyserver/server.py 2017-04-25 11:28:36.000000000 +0000 @@ -48,6 +48,7 @@ NO_SAN_CA = os.path.join(CERTS_PATH, 'cacert.no_san.pem') DEFAULT_CA_DIR = os.path.join(CERTS_PATH, 'ca_path_test') IPV6_ADDR_CA = os.path.join(CERTS_PATH, 'server.ipv6addr.crt') +COMBINED_CERT_AND_KEY = os.path.join(CERTS_PATH, 'server.combined.pem') def _has_ipv6(host): @@ -120,7 +121,7 @@ self.port = sock.getsockname()[1] # Once listen() returns, the server socket is ready - sock.listen(0) + sock.listen(1) if self.ready_event: self.ready_event.set() @@ -210,7 +211,9 @@ def run_tornado_app(app, io_loop, certs, scheme, host): - app.last_req = datetime.fromtimestamp(0) + # We can't use fromtimestamp(0) because of CPython issue 29097, so 
we'll + # just construct the datetime object directly. + app.last_req = datetime(1970, 1, 1) if scheme == 'https': http_server = tornado.httpserver.HTTPServer(app, ssl_options=certs, diff -Nru python-urllib3-1.19.1/dummyserver/testcase.py python-urllib3-1.21.1/dummyserver/testcase.py --- python-urllib3-1.19.1/dummyserver/testcase.py 2016-09-12 09:02:32.000000000 +0000 +++ python-urllib3-1.21.1/dummyserver/testcase.py 2017-02-01 14:04:00.000000000 +0000 @@ -1,4 +1,4 @@ -import unittest +import sys import socket import threading from nose.plugins.skip import SkipTest @@ -13,6 +13,11 @@ from dummyserver.handlers import TestingApp from dummyserver.proxy import ProxyHandler +if sys.version_info >= (2, 7): + import unittest +else: + import unittest2 as unittest + def consume_socket(sock, chunks=65536): while not sock.recv(chunks).endswith(b'\r\n\r\n'): diff -Nru python-urllib3-1.19.1/PKG-INFO python-urllib3-1.21.1/PKG-INFO --- python-urllib3-1.19.1/PKG-INFO 2016-11-16 10:12:35.000000000 +0000 +++ python-urllib3-1.21.1/PKG-INFO 2017-05-02 10:57:45.000000000 +0000 @@ -1,6 +1,6 @@ Metadata-Version: 1.1 Name: urllib3 -Version: 1.19.1 +Version: 1.21.1 Summary: HTTP library with thread-safe connection pooling, file post, and more. Home-page: https://urllib3.readthedocs.io/ Author: Andrey Petrov @@ -9,13 +9,21 @@ Description: urllib3 ======= - .. image:: https://travis-ci.org/shazow/urllib3.png?branch=master + .. image:: https://travis-ci.org/shazow/urllib3.svg?branch=master :alt: Build status on Travis :target: https://travis-ci.org/shazow/urllib3 + .. image:: https://img.shields.io/appveyor/ci/shazow/urllib3/master.svg + :alt: Build status on AppVeyor + :target: https://ci.appveyor.com/project/shazow/urllib3 + .. image:: https://readthedocs.org/projects/urllib3/badge/?version=latest :alt: Documentation Status :target: https://urllib3.readthedocs.io/en/latest/ + + .. image:: https://img.shields.io/codecov/c/github/shazow/urllib3.svg + :alt: Coverage Status + :target: https://codecov.io/gh/shazow/urllib3 .. image:: https://img.shields.io/pypi/v/urllib3.svg?maxAge=86400 :alt: PyPI version @@ -94,11 +102,96 @@ Changes ======= + 1.21.1 (2017-05-02) + ------------------- + + * Fixed SecureTransport issue that would cause long delays in response body + delivery. (Pull #1154) + + * Fixed regression in 1.21 that threw exceptions when users passed the + ``socket_options`` flag to the ``PoolManager``. (Issue #1165) + + * Fixed regression in 1.21 that threw exceptions when users passed the + ``assert_hostname`` or ``assert_fingerprint`` flag to the ``PoolManager``. + (Pull #1157) + + + 1.21 (2017-04-25) + ----------------- + + * Improved performance of certain selector system calls on Python 3.5 and + later. (Pull #1095) + + * Resolved issue where the PyOpenSSL backend would not wrap SysCallError + exceptions appropriately when sending data. (Pull #1125) + + * Selectors now detects a monkey-patched select module after import for modules + that patch the select module like eventlet, greenlet. (Pull #1128) + + * Reduced memory consumption when streaming zlib-compressed responses + (as opposed to raw deflate streams). (Pull #1129) + + * Connection pools now use the entire request context when constructing the + pool key. (Pull #1016) + + * ``PoolManager.connection_from_*`` methods now accept a new keyword argument, + ``pool_kwargs``, which are merged with the existing ``connection_pool_kw``. + (Pull #1016) + + * Add retry counter for ``status_forcelist``. 
(Issue #1147) + + * Added ``contrib`` module for using SecureTransport on macOS: + ``urllib3.contrib.securetransport``. (Pull #1122) + + * urllib3 now only normalizes the case of ``http://`` and ``https://`` schemes: + for schemes it does not recognise, it assumes they are case-sensitive and + leaves them unchanged. + (Issue #1080) + + + 1.20 (2017-01-19) + ----------------- + + * Added support for waiting for I/O using selectors other than select, + improving urllib3's behaviour with large numbers of concurrent connections. + (Pull #1001) + + * Updated the date for the system clock check. (Issue #1005) + + * ConnectionPools now correctly consider hostnames to be case-insensitive. + (Issue #1032) + + * Outdated versions of PyOpenSSL now cause the PyOpenSSL contrib module + to fail when it is injected, rather than at first use. (Pull #1063) + + * Outdated versions of cryptography now cause the PyOpenSSL contrib module + to fail when it is injected, rather than at first use. (Issue #1044) + + * Automatically attempt to rewind a file-like body object when a request is + retried or redirected. (Pull #1039) + + * Fix some bugs that occur when modules incautiously patch the queue module. + (Pull #1061) + + * Prevent retries from occuring on read timeouts for which the request method + was not in the method whitelist. (Issue #1059) + + * Changed the PyOpenSSL contrib module to lazily load idna to avoid + unnecessarily bloating the memory of programs that don't need it. (Pull + #1076) + + * Add support for IPv6 literals with zone identifiers. (Pull #1013) + + * Added support for socks5h:// and socks4a:// schemes when working with SOCKS + proxies, and controlled remote DNS appropriately. (Issue #1035) + + 1.19.1 (2016-11-16) ------------------- * Fixed AppEngine import that didn't function on Python 3.5. (Pull #1025) + 1.19 (2016-11-03) ----------------- diff -Nru python-urllib3-1.19.1/README.rst python-urllib3-1.21.1/README.rst --- python-urllib3-1.19.1/README.rst 2016-10-12 16:41:52.000000000 +0000 +++ python-urllib3-1.21.1/README.rst 2017-04-25 11:28:36.000000000 +0000 @@ -1,13 +1,21 @@ urllib3 ======= -.. image:: https://travis-ci.org/shazow/urllib3.png?branch=master +.. image:: https://travis-ci.org/shazow/urllib3.svg?branch=master :alt: Build status on Travis :target: https://travis-ci.org/shazow/urllib3 +.. image:: https://img.shields.io/appveyor/ci/shazow/urllib3/master.svg + :alt: Build status on AppVeyor + :target: https://ci.appveyor.com/project/shazow/urllib3 + .. image:: https://readthedocs.org/projects/urllib3/badge/?version=latest :alt: Documentation Status :target: https://urllib3.readthedocs.io/en/latest/ + +.. image:: https://img.shields.io/codecov/c/github/shazow/urllib3.svg + :alt: Coverage Status + :target: https://codecov.io/gh/shazow/urllib3 .. 
image:: https://img.shields.io/pypi/v/urllib3.svg?maxAge=86400 :alt: PyPI version diff -Nru python-urllib3-1.19.1/setup.cfg python-urllib3-1.21.1/setup.cfg --- python-urllib3-1.19.1/setup.cfg 2016-11-16 10:12:35.000000000 +0000 +++ python-urllib3-1.21.1/setup.cfg 2017-05-02 10:57:45.000000000 +0000 @@ -5,7 +5,7 @@ cover-erase = true [flake8] -exclude = ./docs/conf.py,./test/*,./urllib3/packages/* +exclude = ./docs/conf.py,./urllib3/packages/* max-line-length = 99 [wheel] @@ -26,5 +26,4 @@ [egg_info] tag_build = tag_date = 0 -tag_svn_revision = 0 diff -Nru python-urllib3-1.19.1/setup.py python-urllib3-1.21.1/setup.py --- python-urllib3-1.19.1/setup.py 2016-11-03 15:16:16.000000000 +0000 +++ python-urllib3-1.21.1/setup.py 2017-04-25 11:28:36.000000000 +0000 @@ -42,7 +42,7 @@ packages=['urllib3', 'urllib3.packages', 'urllib3.packages.ssl_match_hostname', 'urllib3.packages.backports', 'urllib3.contrib', - 'urllib3.util', + 'urllib3.contrib._securetransport', 'urllib3.util', ], requires=[], tests_require=[ diff -Nru python-urllib3-1.19.1/test/benchmark.py python-urllib3-1.21.1/test/benchmark.py --- python-urllib3-1.19.1/test/benchmark.py 2015-02-07 19:27:17.000000000 +0000 +++ python-urllib3-1.21.1/test/benchmark.py 2017-04-25 11:10:19.000000000 +0000 @@ -11,7 +11,7 @@ import urllib sys.path.append('../') -import urllib3 +import urllib3 # noqa: E402 # URLs to download. Doesn't matter as long as they're from the same host, so we @@ -39,7 +39,7 @@ assert url_list for url in url_list: now = time.time() - r = urllib.urlopen(url) + urllib.urlopen(url) elapsed = time.time() - now print("Got in %0.3f: %s" % (elapsed, url)) @@ -49,7 +49,7 @@ pool = urllib3.PoolManager() for url in url_list: now = time.time() - r = pool.request('GET', url, assert_same_host=False) + pool.request('GET', url, assert_same_host=False) elapsed = time.time() - now print("Got in %0.3fs: %s" % (elapsed, url)) diff -Nru python-urllib3-1.19.1/test/contrib/test_gae_manager.py python-urllib3-1.21.1/test/contrib/test_gae_manager.py --- python-urllib3-1.19.1/test/contrib/test_gae_manager.py 2016-11-03 15:16:16.000000000 +0000 +++ python-urllib3-1.21.1/test/contrib/test_gae_manager.py 2017-04-25 11:10:19.000000000 +0000 @@ -190,12 +190,13 @@ self.pool._absolute_url('/successful_retry'), None, 418, None),)) - #test_max_retry = None - #test_disabled_retry = None + # test_max_retry = None + # test_disabled_retry = None # We don't need these tests because URLFetch resolves its own redirects. test_retry_redirect_history = None test_multi_redirect_history = None + class TestGAERetryAfter(TestRetryAfter): __test__ = True diff -Nru python-urllib3-1.19.1/test/contrib/test_pyopenssl_dependencies.py python-urllib3-1.21.1/test/contrib/test_pyopenssl_dependencies.py --- python-urllib3-1.19.1/test/contrib/test_pyopenssl_dependencies.py 1970-01-01 00:00:00.000000000 +0000 +++ python-urllib3-1.21.1/test/contrib/test_pyopenssl_dependencies.py 2017-04-25 11:10:19.000000000 +0000 @@ -0,0 +1,46 @@ +# -*- coding: utf-8 -*- +import unittest + +from nose.plugins.skip import SkipTest + +try: + from urllib3.contrib.pyopenssl import (inject_into_urllib3, + extract_from_urllib3) +except ImportError as e: + raise SkipTest('Could not import PyOpenSSL: %r' % e) + +from mock import patch, Mock + + +class TestPyOpenSSLInjection(unittest.TestCase): + """ + Tests for error handling in pyopenssl's 'inject_into urllib3' + """ + def test_inject_validate_fail_cryptography(self): + """ + Injection should not be supported if cryptography is too old. 
+ """ + try: + with patch("cryptography.x509.extensions.Extensions") as mock: + del mock.get_extension_for_class + self.assertRaises(ImportError, inject_into_urllib3) + finally: + # `inject_into_urllib3` is not supposed to succeed. + # If it does, this test should fail, but we need to + # clean up so that subsequent tests are unaffected. + extract_from_urllib3() + + def test_inject_validate_fail_pyopenssl(self): + """ + Injection should not be supported if pyOpenSSL is too old. + """ + try: + return_val = Mock() + del return_val._x509 + with patch("OpenSSL.crypto.X509", return_value=return_val): + self.assertRaises(ImportError, inject_into_urllib3) + finally: + # `inject_into_urllib3` is not supposed to succeed. + # If it does, this test should fail, but we need to + # clean up so that subsequent tests are unaffected. + extract_from_urllib3() diff -Nru python-urllib3-1.19.1/test/contrib/test_pyopenssl.py python-urllib3-1.21.1/test/contrib/test_pyopenssl.py --- python-urllib3-1.19.1/test/contrib/test_pyopenssl.py 2016-10-12 16:41:52.000000000 +0000 +++ python-urllib3-1.21.1/test/contrib/test_pyopenssl.py 2017-04-25 11:28:36.000000000 +0000 @@ -2,7 +2,6 @@ import unittest from nose.plugins.skip import SkipTest -from urllib3.packages import six try: from urllib3.contrib.pyopenssl import (inject_into_urllib3, @@ -12,8 +11,10 @@ raise SkipTest('Could not import PyOpenSSL: %r' % e) -from ..with_dummyserver.test_https import TestHTTPS, TestHTTPS_TLSv1 -from ..with_dummyserver.test_socketlevel import TestSNI, TestSocketClosing +from ..with_dummyserver.test_https import TestHTTPS, TestHTTPS_TLSv1 # noqa: F401 +from ..with_dummyserver.test_socketlevel import ( # noqa: F401 + TestSNI, TestSocketClosing, TestClientCerts +) def setup_module(): diff -Nru python-urllib3-1.19.1/test/contrib/test_securetransport.py python-urllib3-1.21.1/test/contrib/test_securetransport.py --- python-urllib3-1.19.1/test/contrib/test_securetransport.py 1970-01-01 00:00:00.000000000 +0000 +++ python-urllib3-1.21.1/test/contrib/test_securetransport.py 2017-04-25 11:28:36.000000000 +0000 @@ -0,0 +1,21 @@ +# -*- coding: utf-8 -*- +from nose.plugins.skip import SkipTest + +try: + from urllib3.contrib.securetransport import (inject_into_urllib3, + extract_from_urllib3) +except ImportError as e: + raise SkipTest('Could not import SecureTransport: %r' % e) + +from ..with_dummyserver.test_https import TestHTTPS, TestHTTPS_TLSv1 # noqa: F401 +from ..with_dummyserver.test_socketlevel import ( # noqa: F401 + TestSNI, TestSocketClosing, TestClientCerts +) + + +def setup_module(): + inject_into_urllib3() + + +def teardown_module(): + extract_from_urllib3() diff -Nru python-urllib3-1.19.1/test/contrib/test_socks.py python-urllib3-1.21.1/test/contrib/test_socks.py --- python-urllib3-1.19.1/test/contrib/test_socks.py 2016-11-03 13:44:18.000000000 +0000 +++ python-urllib3-1.21.1/test/contrib/test_socks.py 2017-05-02 09:08:45.000000000 +0000 @@ -223,12 +223,44 @@ self._start_server(request_handler) proxy_url = "socks5://%s:%s" % (self.host, self.port) pm = socks.SOCKSProxyManager(proxy_url) + self.addCleanup(pm.clear) response = pm.request('GET', 'http://16.17.18.19') self.assertEqual(response.status, 200) self.assertEqual(response.data, b'') self.assertEqual(response.headers['Server'], 'SocksTestServer') + def test_local_dns(self): + def request_handler(listener): + sock = listener.accept()[0] + + handler = handle_socks5_negotiation(sock, negotiate=False) + addr, port = next(handler) + + self.assertIn(addr, ['127.0.0.1', '::1']) + 
self.assertTrue(port, 80) + handler.send(True) + + while True: + buf = sock.recv(65535) + if buf.endswith(b'\r\n\r\n'): + break + + sock.sendall(b'HTTP/1.1 200 OK\r\n' + b'Server: SocksTestServer\r\n' + b'Content-Length: 0\r\n' + b'\r\n') + sock.close() + + self._start_server(request_handler) + proxy_url = "socks5://%s:%s" % (self.host, self.port) + pm = socks.SOCKSProxyManager(proxy_url) + response = pm.request('GET', 'http://localhost') + + self.assertEqual(response.status, 200) + self.assertEqual(response.data, b'') + self.assertEqual(response.headers['Server'], 'SocksTestServer') + def test_correct_header_line(self): def request_handler(listener): sock = listener.accept()[0] @@ -256,8 +288,9 @@ sock.close() self._start_server(request_handler) - proxy_url = "socks5://%s:%s" % (self.host, self.port) + proxy_url = "socks5h://%s:%s" % (self.host, self.port) pm = socks.SOCKSProxyManager(proxy_url) + self.addCleanup(pm.clear) response = pm.request('GET', 'http://example.com') self.assertEqual(response.status, 200) @@ -268,8 +301,9 @@ event.wait() self._start_server(request_handler) - proxy_url = "socks5://%s:%s" % (self.host, self.port) + proxy_url = "socks5h://%s:%s" % (self.host, self.port) pm = socks.SOCKSProxyManager(proxy_url) + self.addCleanup(pm.clear) self.assertRaises( ConnectTimeoutError, pm.request, 'GET', 'http://example.com', @@ -285,8 +319,9 @@ event.set() self._start_server(request_handler) - proxy_url = "socks5://%s:%s" % (self.host, self.port) + proxy_url = "socks5h://%s:%s" % (self.host, self.port) pm = socks.SOCKSProxyManager(proxy_url) + self.addCleanup(pm.clear) event.wait() self.assertRaises( @@ -308,8 +343,9 @@ sock.close() self._start_server(request_handler) - proxy_url = "socks5://%s:%s" % (self.host, self.port) + proxy_url = "socks5h://%s:%s" % (self.host, self.port) pm = socks.SOCKSProxyManager(proxy_url) + self.addCleanup(pm.clear) self.assertRaises( NewConnectionError, pm.request, 'GET', 'http://example.com', @@ -344,7 +380,9 @@ self._start_server(request_handler) proxy_url = "socks5://%s:%s" % (self.host, self.port) pm = socks.SOCKSProxyManager(proxy_url, username='user', - password='pass') + password='pass') + self.addCleanup(pm.clear) + response = pm.request('GET', 'http://16.17.18.19') self.assertEqual(response.status, 200) @@ -361,9 +399,10 @@ next(handler) self._start_server(request_handler) - proxy_url = "socks5://%s:%s" % (self.host, self.port) + proxy_url = "socks5h://%s:%s" % (self.host, self.port) pm = socks.SOCKSProxyManager(proxy_url, username='user', - password='badpass') + password='badpass') + self.addCleanup(pm.clear) try: pm.request('GET', 'http://example.com', retries=False) @@ -403,6 +442,7 @@ pm = socks.SOCKSProxyManager( proxy_url, source_address=('127.0.0.1', expected_port) ) + self.addCleanup(pm.clear) response = pm.request('GET', 'http://16.17.18.19') self.assertEqual(response.status, 200) @@ -439,12 +479,44 @@ self._start_server(request_handler) proxy_url = "socks4://%s:%s" % (self.host, self.port) pm = socks.SOCKSProxyManager(proxy_url) + self.addCleanup(pm.clear) response = pm.request('GET', 'http://16.17.18.19') self.assertEqual(response.status, 200) self.assertEqual(response.headers['Server'], 'SocksTestServer') self.assertEqual(response.data, b'') + def test_local_dns(self): + def request_handler(listener): + sock = listener.accept()[0] + + handler = handle_socks4_negotiation(sock) + addr, port = next(handler) + + self.assertEqual(addr, '127.0.0.1') + self.assertTrue(port, 80) + handler.send(True) + + while True: + buf = 
sock.recv(65535) + if buf.endswith(b'\r\n\r\n'): + break + + sock.sendall(b'HTTP/1.1 200 OK\r\n' + b'Server: SocksTestServer\r\n' + b'Content-Length: 0\r\n' + b'\r\n') + sock.close() + + self._start_server(request_handler) + proxy_url = "socks4://%s:%s" % (self.host, self.port) + pm = socks.SOCKSProxyManager(proxy_url) + response = pm.request('GET', 'http://localhost') + + self.assertEqual(response.status, 200) + self.assertEqual(response.headers['Server'], 'SocksTestServer') + self.assertEqual(response.data, b'') + def test_correct_header_line(self): def request_handler(listener): sock = listener.accept()[0] @@ -472,8 +544,9 @@ sock.close() self._start_server(request_handler) - proxy_url = "socks4://%s:%s" % (self.host, self.port) + proxy_url = "socks4a://%s:%s" % (self.host, self.port) pm = socks.SOCKSProxyManager(proxy_url) + self.addCleanup(pm.clear) response = pm.request('GET', 'http://example.com') self.assertEqual(response.status, 200) @@ -491,8 +564,9 @@ sock.close() self._start_server(request_handler) - proxy_url = "socks4://%s:%s" % (self.host, self.port) + proxy_url = "socks4a://%s:%s" % (self.host, self.port) pm = socks.SOCKSProxyManager(proxy_url) + self.addCleanup(pm.clear) self.assertRaises( NewConnectionError, pm.request, 'GET', 'http://example.com', @@ -525,6 +599,7 @@ self._start_server(request_handler) proxy_url = "socks4://%s:%s" % (self.host, self.port) pm = socks.SOCKSProxyManager(proxy_url, username='user') + self.addCleanup(pm.clear) response = pm.request('GET', 'http://16.17.18.19') self.assertEqual(response.status, 200) @@ -539,8 +614,9 @@ next(handler) self._start_server(request_handler) - proxy_url = "socks4://%s:%s" % (self.host, self.port) + proxy_url = "socks4a://%s:%s" % (self.host, self.port) pm = socks.SOCKSProxyManager(proxy_url, username='baduser') + self.addCleanup(pm.clear) try: pm.request('GET', 'http://example.com', retries=False) @@ -584,14 +660,15 @@ self.assertTrue(buf.startswith(b'GET / HTTP/1.1\r\n')) tls.sendall(b'HTTP/1.1 200 OK\r\n' - b'Server: SocksTestServer\r\n' - b'Content-Length: 0\r\n' - b'\r\n') + b'Server: SocksTestServer\r\n' + b'Content-Length: 0\r\n' + b'\r\n') tls.close() self._start_server(request_handler) - proxy_url = "socks5://%s:%s" % (self.host, self.port) + proxy_url = "socks5h://%s:%s" % (self.host, self.port) pm = socks.SOCKSProxyManager(proxy_url) + self.addCleanup(pm.clear) response = pm.request('GET', 'https://localhost') self.assertEqual(response.status, 200) diff -Nru python-urllib3-1.19.1/test/__init__.py python-urllib3-1.21.1/test/__init__.py --- python-urllib3-1.19.1/test/__init__.py 2016-09-12 09:02:32.000000000 +0000 +++ python-urllib3-1.21.1/test/__init__.py 2017-05-02 09:08:45.000000000 +0000 @@ -4,11 +4,13 @@ import functools import logging import socket +import platform from nose.plugins.skip import SkipTest -from urllib3.exceptions import MaxRetryError, HTTPWarning +from urllib3.exceptions import HTTPWarning from urllib3.packages import six +from urllib3.util import ssl_ # We need a host that will not immediately close the connection with a TCP # Reset. 
SO suggests this hostname @@ -29,10 +31,12 @@ new_filters.append(f) warnings.filters[:] = new_filters + def setUp(): clear_warnings() warnings.simplefilter('ignore', HTTPWarning) + def onlyPy26OrOlder(test): """Skips this test unless you are on Python2.6.x or earlier.""" @@ -44,6 +48,7 @@ return test(*args, **kwargs) return wrapper + def onlyPy27OrNewer(test): """Skips this test unless you are on Python 2.7.x or later.""" @@ -55,6 +60,7 @@ return test(*args, **kwargs) return wrapper + def onlyPy279OrNewer(test): """Skips this test unless you are on Python 2.7.9 or later.""" @@ -66,6 +72,7 @@ return test(*args, **kwargs) return wrapper + def onlyPy2(test): """Skips this test unless you are on Python 2.x""" @@ -77,6 +84,7 @@ return test(*args, **kwargs) return wrapper + def onlyPy3(test): """Skips this test unless you are on Python3.x""" @@ -88,30 +96,66 @@ return test(*args, **kwargs) return wrapper + +def notSecureTransport(test): + """Skips this test when SecureTransport is in use.""" + + @functools.wraps(test) + def wrapper(*args, **kwargs): + msg = "{name} does not run with SecureTransport".format(name=test.__name__) + if ssl_.IS_SECURETRANSPORT: + raise SkipTest(msg) + return test(*args, **kwargs) + return wrapper + + +def onlyPy27OrNewerOrNonWindows(test): + """Skips this test unless you are on Python2.7+ or non-Windows""" + @functools.wraps(test) + def wrapper(*args, **kwargs): + msg = "{name} requires Python2.7+ or non-Windows to run".format(name=test.__name__) + if sys.version_info < (2, 7) and platform.system() == 'Windows': + raise SkipTest(msg) + return test(*args, **kwargs) + return wrapper + + +_requires_network_has_route = None + + def requires_network(test): """Helps you skip tests that require the network""" def _is_unreachable_err(err): return getattr(err, 'errno', None) in (errno.ENETUNREACH, - errno.EHOSTUNREACH) # For OSX + errno.EHOSTUNREACH) # For OSX - @functools.wraps(test) - def wrapper(*args, **kwargs): - msg = "Can't run {name} because the network is unreachable".format( - name=test.__name__) + def _has_route(): try: - return test(*args, **kwargs) + sock = socket.create_connection((TARPIT_HOST, 80), 0.0001) + sock.close() + return True + except socket.timeout: + return True except socket.error as e: - # This test needs an initial network connection to attempt the - # connection to the TARPIT_HOST. This fails if you are in a place - # without an Internet connection, so we skip the test in that case. if _is_unreachable_err(e): - raise SkipTest(msg) - raise - except MaxRetryError as e: - if _is_unreachable_err(e.reason): - raise SkipTest(msg) - raise + return False + else: + raise + + @functools.wraps(test) + def wrapper(*args, **kwargs): + global _requires_network_has_route + + if _requires_network_has_route is None: + _requires_network_has_route = _has_route() + + if _requires_network_has_route: + return test(*args, **kwargs) + else: + msg = "Can't run {name} because the network is unreachable".format( + name=test.__name__) + raise SkipTest(msg) return wrapper diff -Nru python-urllib3-1.19.1/test/port_helpers.py python-urllib3-1.21.1/test/port_helpers.py --- python-urllib3-1.19.1/test/port_helpers.py 2015-02-07 19:27:17.000000000 +0000 +++ python-urllib3-1.21.1/test/port_helpers.py 2017-04-25 11:10:19.000000000 +0000 @@ -9,6 +9,7 @@ HOST = "127.0.0.1" HOSTv6 = "::1" + def find_unused_port(family=socket.AF_INET, socktype=socket.SOCK_STREAM): """Returns an unused port that should be suitable for binding. 
This is achieved by creating a temporary socket with the same family and type as @@ -69,6 +70,7 @@ del tempsock return port + def bind_port(sock, host=HOST): """Bind the socket to a free port and return the port number. Relies on ephemeral ports in order to ensure we are using an unbound port. This is @@ -86,11 +88,11 @@ if sock.family == socket.AF_INET and sock.type == socket.SOCK_STREAM: if hasattr(socket, 'SO_REUSEADDR'): if sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) == 1: - raise ValueError("tests should never set the SO_REUSEADDR " \ + raise ValueError("tests should never set the SO_REUSEADDR " "socket option on TCP/IP sockets!") if hasattr(socket, 'SO_REUSEPORT'): if sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT) == 1: - raise ValueError("tests should never set the SO_REUSEPORT " \ + raise ValueError("tests should never set the SO_REUSEPORT " "socket option on TCP/IP sockets!") if hasattr(socket, 'SO_EXCLUSIVEADDRUSE'): sock.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1) diff -Nru python-urllib3-1.19.1/test/socketpair_helper.py python-urllib3-1.21.1/test/socketpair_helper.py --- python-urllib3-1.19.1/test/socketpair_helper.py 1970-01-01 00:00:00.000000000 +0000 +++ python-urllib3-1.21.1/test/socketpair_helper.py 2017-04-25 11:10:19.000000000 +0000 @@ -0,0 +1,62 @@ +import socket + +# Figuring out what errors could come out of a socket. There are three +# different situations. Python 3 post-PEP3151 will define and use +# BlockingIOError and InterruptedError from sockets. For Python pre-PEP3151 +# both OSError and socket.error can be raised except on Windows where +# WindowsError can also be raised. We want to catch all of these possible +# exceptions so we catch WindowsError if it's defined. +try: + _CONNECT_ERROR = (BlockingIOError, InterruptedError) +except NameError: + try: + _CONNECT_ERROR = (WindowsError, OSError, socket.error) # noqa: F821 + except NameError: + _CONNECT_ERROR = (OSError, socket.error) + +if hasattr(socket, 'socketpair'): + # Since Python 3.5, socket.socketpair() is now also available on Windows + socketpair = socket.socketpair +else: + # Replacement for socket.socketpair() + def socketpair(family=socket.AF_INET, type=socket.SOCK_STREAM, proto=0): + """A socket pair usable as a self-pipe, for Windows. + + Origin: https://gist.github.com/4325783, by Geert Jansen. + Public domain. + """ + if family == socket.AF_INET: + host = '127.0.0.1' + elif family == socket.AF_INET6: + host = '::1' + else: + raise ValueError("Only AF_INET and AF_INET6 socket address " + "families are supported") + if type != socket.SOCK_STREAM: + raise ValueError("Only SOCK_STREAM socket type is supported") + if proto != 0: + raise ValueError("Only protocol zero is supported") + + # We create a connected TCP socket. Note the trick with setblocking(0) + # that prevents us from having to create a thread. 
+ lsock = socket.socket(family, type, proto) + try: + lsock.bind((host, 0)) + lsock.listen(1) + # On IPv6, ignore flow_info and scope_id + addr, port = lsock.getsockname()[:2] + csock = socket.socket(family, type, proto) + try: + csock.setblocking(False) + try: + csock.connect((addr, port)) + except _CONNECT_ERROR: + pass + csock.setblocking(True) + ssock, _ = lsock.accept() + except: + csock.close() + raise + finally: + lsock.close() + return (ssock, csock) diff -Nru python-urllib3-1.19.1/test/test_collections.py python-urllib3-1.21.1/test/test_collections.py --- python-urllib3-1.19.1/test/test_collections.py 2016-09-12 09:02:32.000000000 +0000 +++ python-urllib3-1.21.1/test/test_collections.py 2017-04-25 11:10:19.000000000 +0000 @@ -4,11 +4,11 @@ HTTPHeaderDict, RecentlyUsedContainer as Container ) +from nose.plugins.skip import SkipTest + from urllib3.packages import six xrange = six.moves.xrange -from nose.plugins.skip import SkipTest - class TestLRUContainer(unittest.TestCase): def test_maxsize(self): @@ -61,7 +61,7 @@ # Keys should be ordered by access time self.assertEqual(list(d.keys()), [5, 6, 7, 8, 9]) - new_order = [7,8,6,9,5] + new_order = [7, 8, 6, 9, 5] for k in new_order: d[k] @@ -109,7 +109,7 @@ for i in xrange(5): d[i] = i self.assertEqual(list(d.keys()), list(xrange(5))) - self.assertEqual(evicted_items, []) # Nothing disposed + self.assertEqual(evicted_items, []) # Nothing disposed d[5] = 5 self.assertEqual(list(d.keys()), list(xrange(1, 6))) @@ -148,19 +148,20 @@ h = HTTPHeaderDict(ab=1, cd=2, ef=3, gh=4) self.assertEqual(len(h), 4) self.assertTrue('ab' in h) - + def test_create_from_dict(self): h = HTTPHeaderDict(dict(ab=1, cd=2, ef=3, gh=4)) self.assertEqual(len(h), 4) self.assertTrue('ab' in h) - + def test_create_from_iterator(self): teststr = 'urllib3ontherocks' h = HTTPHeaderDict((c, c*5) for c in teststr) self.assertEqual(len(h), len(set(teststr))) - + def test_create_from_list(self): - h = HTTPHeaderDict([('ab', 'A'), ('cd', 'B'), ('cookie', 'C'), ('cookie', 'D'), ('cookie', 'E')]) + headers = [('ab', 'A'), ('cd', 'B'), ('cookie', 'C'), ('cookie', 'D'), ('cookie', 'E')] + h = HTTPHeaderDict(headers) self.assertEqual(len(h), 3) self.assertTrue('ab' in h) clist = h.getlist('cookie') @@ -169,7 +170,8 @@ self.assertEqual(clist[-1], 'E') def test_create_from_headerdict(self): - org = HTTPHeaderDict([('ab', 'A'), ('cd', 'B'), ('cookie', 'C'), ('cookie', 'D'), ('cookie', 'E')]) + headers = [('ab', 'A'), ('cd', 'B'), ('cookie', 'C'), ('cookie', 'D'), ('cookie', 'E')] + org = HTTPHeaderDict(headers) h = HTTPHeaderDict(org) self.assertEqual(len(h), 3) self.assertTrue('ab' in h) @@ -219,7 +221,7 @@ self.assertEqual(self.d['b'], '100') self.d.add('cookie', 'with, comma') self.assertEqual(self.d.getlist('cookie'), ['foo', 'bar', 'asdf', 'with, comma']) - + def test_extend_from_container(self): h = NonMappingHeaderContainer(Cookie='foo', e='foofoo') self.d.extend(h) @@ -299,7 +301,9 @@ def test_dict_conversion(self): # Also tested in connectionpool, needs to preserve case - hdict = {'Content-Length': '0', 'Content-type': 'text/plain', 'Server': 'TornadoServer/1.2.3'} + hdict = {'Content-Length': '0', + 'Content-type': 'text/plain', + 'Server': 'TornadoServer/1.2.3'} h = dict(HTTPHeaderDict(hdict).items()) self.assertEqual(hdict, h) self.assertEqual(hdict, dict(HTTPHeaderDict(hdict))) @@ -339,5 +343,6 @@ self.assertEqual(d['www-authenticate'], 'asdf, bla') self.assertEqual(d.getlist('www-authenticate'), ['asdf', 'bla']) + if __name__ == '__main__': unittest.main() diff -Nru 
diff -Nru python-urllib3-1.19.1/test/test_compatibility.py python-urllib3-1.21.1/test/test_compatibility.py
--- python-urllib3-1.19.1/test/test_compatibility.py 2015-02-07 19:27:17.000000000 +0000
+++ python-urllib3-1.21.1/test/test_compatibility.py 2017-05-02 09:08:45.000000000 +0000
@@ -10,7 +10,7 @@
             warnings.simplefilter("always")
 
             # strict=True is deprecated in Py33+
-            conn = HTTPConnection('localhost', 12345, strict=True)
+            HTTPConnection('localhost', 12345, strict=True)
 
             if w:
                 self.fail('HTTPConnection raised warning on strict=True: %r' % w[0].message)
@@ -18,6 +18,6 @@
     def test_connection_source_address(self):
         try:
             # source_address does not exist in Py26-
-            conn = HTTPConnection('localhost', 12345, source_address='127.0.0.1')
+            HTTPConnection('localhost', 12345, source_address='127.0.0.1')
         except TypeError as e:
             self.fail('HTTPConnection raised TypeError on source_adddress: %r' % e)
diff -Nru python-urllib3-1.19.1/test/test_connectionpool.py python-urllib3-1.21.1/test/test_connectionpool.py
--- python-urllib3-1.19.1/test/test_connectionpool.py 2016-11-16 09:43:02.000000000 +0000
+++ python-urllib3-1.21.1/test/test_connectionpool.py 2017-05-02 09:08:45.000000000 +0000
@@ -1,6 +1,6 @@
 from __future__ import absolute_import
 
-import unittest
+import sys
 
 from urllib3.connectionpool import (
     connection_from_url,
@@ -31,6 +31,11 @@
 
 from dummyserver.server import DEFAULT_CA
 
+if sys.version_info >= (2, 7):
+    import unittest
+else:
+    import unittest2 as unittest
+
 
 class TestConnectionPool(unittest.TestCase):
     """
@@ -49,10 +54,18 @@
             ('http://google.com/', 'http://google.com:80/abracadabra'),
             ('https://google.com:443/', 'https://google.com/abracadabra'),
             ('https://google.com/', 'https://google.com:443/abracadabra'),
+            ('http://[2607:f8b0:4005:805::200e%25eth0]/',
+             'http://[2607:f8b0:4005:805::200e%eth0]/'),
+            ('https://[2607:f8b0:4005:805::200e%25eth0]:443/',
+             'https://[2607:f8b0:4005:805::200e%eth0]:443/'),
+            ('http://[::1]/', 'http://[::1]'),
+            ('http://[2001:558:fc00:200:f816:3eff:fef9:b954%lo]/',
+             'http://[2001:558:fc00:200:f816:3eff:fef9:b954%25lo]')
         ]
 
         for a, b in same_host:
             c = connection_from_url(a)
+            self.addCleanup(c.close)
             self.assertTrue(c.is_same_host(b), "%s =? %s" % (a, b))
 
         not_same_host = [
@@ -70,12 +83,17 @@
             ('https://google.com:80', 'http://google.com'),
             ('https://google.com:443', 'http://google.com'),
             ('http://google.com:80', 'https://google.com'),
+            # Zone identifiers are unique connection end points and should
+            # never be equivalent.
+            ('http://[dead::beef]', 'https://[dead::beef%en5]/'),
         ]
 
         for a, b in not_same_host:
             c = connection_from_url(a)
+            self.addCleanup(c.close)
             self.assertFalse(c.is_same_host(b), "%s =? %s" % (a, b))
             c = connection_from_url(b)
+            self.addCleanup(c.close)
             self.assertFalse(c.is_same_host(a), "%s =? %s" % (b, a))
 
     def test_same_host_no_port(self):
@@ -100,9 +118,11 @@
         for a, b in same_host_http:
             c = HTTPConnectionPool(a)
+            self.addCleanup(c.close)
             self.assertTrue(c.is_same_host(b), "%s =? %s" % (a, b))
         for a, b in same_host_https:
             c = HTTPSConnectionPool(a)
+            self.addCleanup(c.close)
             self.assertTrue(c.is_same_host(b), "%s =? %s" % (a, b))
 
         not_same_host_http = [
@@ -118,17 +138,22 @@
         for a, b in not_same_host_http:
             c = HTTPConnectionPool(a)
+            self.addCleanup(c.close)
             self.assertFalse(c.is_same_host(b), "%s =? %s" % (a, b))
             c = HTTPConnectionPool(b)
+            self.addCleanup(c.close)
             self.assertFalse(c.is_same_host(a), "%s =? %s" % (b, a))
         for a, b in not_same_host_https:
             c = HTTPSConnectionPool(a)
+            self.addCleanup(c.close)
             self.assertFalse(c.is_same_host(b), "%s =? %s" % (a, b))
%s" % (a, b)) c = HTTPSConnectionPool(b) + self.addCleanup(c.close) self.assertFalse(c.is_same_host(a), "%s =? %s" % (b, a)) def test_max_connections(self): pool = HTTPConnectionPool(host='localhost', maxsize=1, block=True) + self.addCleanup(pool.close) pool._get_conn(timeout=0.01) @@ -148,12 +173,13 @@ def test_pool_edgecases(self): pool = HTTPConnectionPool(host='localhost', maxsize=1, block=False) + self.addCleanup(pool.close) conn1 = pool._get_conn() - conn2 = pool._get_conn() # New because block=False + conn2 = pool._get_conn() # New because block=False pool._put_conn(conn1) - pool._put_conn(conn2) # Should be discarded + pool._put_conn(conn2) # Should be discarded self.assertEqual(conn1, pool._get_conn()) self.assertNotEqual(conn2, pool._get_conn()) @@ -183,10 +209,10 @@ "Max retries exceeded with url: Test. " "(Caused by %r)" % err) - def test_pool_size(self): POOL_SIZE = 1 pool = HTTPConnectionPool(host='localhost', maxsize=POOL_SIZE, block=True) + self.addCleanup(pool.close) def _raise(ex): raise ex() @@ -213,6 +239,7 @@ def test_assert_same_host(self): c = connection_from_url('http://google.com:80') + self.addCleanup(c.close) self.assertRaises(HostChangedError, c.request, 'GET', 'http://yahoo.com:80', assert_same_host=True) @@ -242,6 +269,7 @@ def test_pool_timeouts(self): pool = HTTPConnectionPool(host='localhost') + self.addCleanup(pool.close) conn = pool._new_conn() self.assertEqual(conn.__class__, HTTPConnection) self.assertEqual(pool.timeout.__class__, Timeout) @@ -277,6 +305,7 @@ def test_absolute_url(self): c = connection_from_url('http://google.com:80') + self.addCleanup(c.close) self.assertEqual( 'http://google.com:80/path?query=foo', c._absolute_url('path?query=foo')) @@ -299,6 +328,7 @@ raise RealBad() c = connection_from_url('http://localhost:80') + self.addCleanup(c.close) c._make_request = kaboom initial_pool_size = c.pool.qsize() @@ -313,7 +343,8 @@ self.assertEqual(initial_pool_size, new_pool_size) def test_release_conn_param_is_respected_after_http_error_retry(self): - """For successful ```urlopen(release_conn=False)```, the connection isn't released, even after a retry. + """For successful ```urlopen(release_conn=False)```, + the connection isn't released, even after a retry. 
@@ -313,7 +343,8 @@
         self.assertEqual(initial_pool_size, new_pool_size)
 
     def test_release_conn_param_is_respected_after_http_error_retry(self):
-        """For successful ```urlopen(release_conn=False)```, the connection isn't released, even after a retry.
+        """For successful ```urlopen(release_conn=False)```,
+        the connection isn't released, even after a retry.
 
         This is a regression test for issue #651 [1], where the connection
         would be released if the initial request failed, even if a retry
@@ -343,6 +374,7 @@
 
         def _test(exception):
             pool = HTTPConnectionPool(host='localhost', maxsize=1, block=True)
+            self.addCleanup(pool.close)
 
             # Verify that the request succeeds after two attempts, and that the
             # connection is left on the response object, instead of being
@@ -380,6 +412,7 @@
                 return httplib_response
 
         pool = CustomConnectionPool(host='localhost', maxsize=1, block=True)
+        self.addCleanup(pool.close)
         response = pool.request('GET', '/', retries=False, chunked=True,
                                 preload_content=False)
         self.assertTrue(isinstance(response, CustomHTTPResponse))
diff -Nru python-urllib3-1.19.1/test/test_connection.py python-urllib3-1.21.1/test/test_connection.py
--- python-urllib3-1.19.1/test/test_connection.py 2016-11-16 09:43:02.000000000 +0000
+++ python-urllib3-1.21.1/test/test_connection.py 2017-05-02 09:08:45.000000000 +0000
@@ -1,13 +1,18 @@
-import unittest
-
+import datetime
 import mock
+import sys
 
 from urllib3.connection import (
     CertificateError,
-    VerifiedHTTPSConnection,
     _match_hostname,
+    RECENT_DATE
 )
 
+if sys.version_info >= (2, 7):
+    import unittest
+else:
+    import unittest2 as unittest
+
 
 class TestConnection(unittest.TestCase):
     """
@@ -43,6 +48,14 @@
         )
         self.assertEqual(e._peer_cert, cert)
 
+    def test_recent_date(self):
+        # This test is to make sure that the RECENT_DATE value
+        # doesn't get too far behind what the current date is.
+        # When this test fails update urllib3.connection.RECENT_DATE
+        # according to the rules defined in that file.
+        two_years = datetime.timedelta(days=365 * 2)
+        self.assertGreater(RECENT_DATE, (datetime.datetime.today() - two_years).date())
+
 
 if __name__ == '__main__':
     unittest.main()
diff -Nru python-urllib3-1.19.1/test/test_exceptions.py python-urllib3-1.21.1/test/test_exceptions.py
--- python-urllib3-1.19.1/test/test_exceptions.py 2016-09-12 09:02:32.000000000 +0000
+++ python-urllib3-1.21.1/test/test_exceptions.py 2017-05-02 09:08:45.000000000 +0000
@@ -8,7 +8,6 @@
 
 from urllib3.connectionpool import HTTPConnectionPool
 
-
 class TestPickle(unittest.TestCase):
 
     def verify_pickling(self, item):
diff -Nru python-urllib3-1.19.1/test/test_fields.py python-urllib3-1.21.1/test/test_fields.py
--- python-urllib3-1.19.1/test/test_fields.py 2016-11-03 15:16:16.000000000 +0000
+++ python-urllib3-1.21.1/test/test_fields.py 2017-04-25 11:10:19.000000000 +0000
@@ -1,7 +1,7 @@
 import unittest
 
 from urllib3.fields import guess_content_type, RequestField
-from urllib3.packages.six import u, PY3
+from urllib3.packages.six import u
 
 from . import onlyPy2
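The test_filepost.py hunks below mostly hoist the expected byte strings into an `expected` variable, but they double as a reference for the wire format: encode_multipart_formdata() returns the encoded body together with its Content-Type header value. Roughly:

    from urllib3.filepost import encode_multipart_formdata

    # A fixed boundary keeps the output predictable; normally it is random.
    body, content_type = encode_multipart_formdata([('k', 'v')], boundary='xxx')
    # content_type == 'multipart/form-data; boundary=xxx'
    # body == (b'--xxx\r\n'
    #          b'Content-Disposition: form-data; name="k"\r\n'
    #          b'\r\n'
    #          b'v\r\n'
    #          b'--xxx--\r\n')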
diff -Nru python-urllib3-1.19.1/test/test_filepost.py python-urllib3-1.21.1/test/test_filepost.py
--- python-urllib3-1.19.1/test/test_filepost.py 2015-02-07 19:27:17.000000000 +0000
+++ python-urllib3-1.21.1/test/test_filepost.py 2017-04-25 11:10:19.000000000 +0000
@@ -39,7 +39,6 @@
         encoded, _ = encode_multipart_formdata(fields, boundary=BOUNDARY)
         self.assertEqual(encoded.count(b(BOUNDARY)), 3)
 
-
     def test_field_encoding(self):
         fieldsets = [
             [('k', 'v'), ('k2', 'v2')],
@@ -49,85 +48,79 @@
 
         for fields in fieldsets:
             encoded, content_type = encode_multipart_formdata(fields, boundary=BOUNDARY)
+            expected = (b'--' + b(BOUNDARY) + b'\r\n'
+                        b'Content-Disposition: form-data; name="k"\r\n'
+                        b'\r\n'
+                        b'v\r\n'
+                        b'--' + b(BOUNDARY) + b'\r\n'
+                        b'Content-Disposition: form-data; name="k2"\r\n'
+                        b'\r\n'
+                        b'v2\r\n'
+                        b'--' + b(BOUNDARY) + b'--\r\n')
 
-            self.assertEqual(encoded,
-                b'--' + b(BOUNDARY) + b'\r\n'
-                b'Content-Disposition: form-data; name="k"\r\n'
-                b'\r\n'
-                b'v\r\n'
-                b'--' + b(BOUNDARY) + b'\r\n'
-                b'Content-Disposition: form-data; name="k2"\r\n'
-                b'\r\n'
-                b'v2\r\n'
-                b'--' + b(BOUNDARY) + b'--\r\n'
-                , fields)
+            self.assertEqual(encoded, expected, fields)
 
             self.assertEqual(content_type,
-                'multipart/form-data; boundary=' + str(BOUNDARY))
-
+                             'multipart/form-data; boundary=' + str(BOUNDARY))
 
     def test_filename(self):
         fields = [('k', ('somename', b'v'))]
 
         encoded, content_type = encode_multipart_formdata(fields, boundary=BOUNDARY)
+        expected = (b'--' + b(BOUNDARY) + b'\r\n'
+                    b'Content-Disposition: form-data; name="k"; filename="somename"\r\n'
+                    b'Content-Type: application/octet-stream\r\n'
+                    b'\r\n'
+                    b'v\r\n'
+                    b'--' + b(BOUNDARY) + b'--\r\n')
 
-        self.assertEqual(encoded,
-            b'--' + b(BOUNDARY) + b'\r\n'
-            b'Content-Disposition: form-data; name="k"; filename="somename"\r\n'
-            b'Content-Type: application/octet-stream\r\n'
-            b'\r\n'
-            b'v\r\n'
-            b'--' + b(BOUNDARY) + b'--\r\n'
-            )
+        self.assertEqual(encoded, expected)
 
         self.assertEqual(content_type,
-            'multipart/form-data; boundary=' + str(BOUNDARY))
-
+                         'multipart/form-data; boundary=' + str(BOUNDARY))
 
     def test_textplain(self):
         fields = [('k', ('somefile.txt', b'v'))]
 
         encoded, content_type = encode_multipart_formdata(fields, boundary=BOUNDARY)
+        expected = (b'--' + b(BOUNDARY) + b'\r\n'
+                    b'Content-Disposition: form-data; name="k"; filename="somefile.txt"\r\n'
+                    b'Content-Type: text/plain\r\n'
+                    b'\r\n'
+                    b'v\r\n'
+                    b'--' + b(BOUNDARY) + b'--\r\n')
 
-        self.assertEqual(encoded,
-            b'--' + b(BOUNDARY) + b'\r\n'
-            b'Content-Disposition: form-data; name="k"; filename="somefile.txt"\r\n'
-            b'Content-Type: text/plain\r\n'
-            b'\r\n'
-            b'v\r\n'
-            b'--' + b(BOUNDARY) + b'--\r\n'
-            )
+        self.assertEqual(encoded, expected)
 
         self.assertEqual(content_type,
-            'multipart/form-data; boundary=' + str(BOUNDARY))
-
+                         'multipart/form-data; boundary=' + str(BOUNDARY))
 
     def test_explicit(self):
         fields = [('k', ('somefile.txt', b'v', 'image/jpeg'))]
 
         encoded, content_type = encode_multipart_formdata(fields, boundary=BOUNDARY)
+        expected = (b'--' + b(BOUNDARY) + b'\r\n'
+                    b'Content-Disposition: form-data; name="k"; filename="somefile.txt"\r\n'
+                    b'Content-Type: image/jpeg\r\n'
+                    b'\r\n'
+                    b'v\r\n'
+                    b'--' + b(BOUNDARY) + b'--\r\n')
 
-        self.assertEqual(encoded,
-            b'--' + b(BOUNDARY) + b'\r\n'
-            b'Content-Disposition: form-data; name="k"; filename="somefile.txt"\r\n'
-            b'Content-Type: image/jpeg\r\n'
-            b'\r\n'
-            b'v\r\n'
-            b'--' + b(BOUNDARY) + b'--\r\n'
-            )
+        self.assertEqual(encoded, expected)
 
         self.assertEqual(content_type,
-            'multipart/form-data; boundary=' + str(BOUNDARY))
+                         'multipart/form-data; boundary=' + str(BOUNDARY))
 
     def test_request_fields(self):
-        fields = [RequestField('k', b'v', filename='somefile.txt', headers={'Content-Type': 'image/jpeg'})]
+        fields = [RequestField('k', b'v',
+                               filename='somefile.txt',
+                               headers={'Content-Type': 'image/jpeg'})]
 
-        encoded, content_type = encode_multipart_formdata(fields, boundary=BOUNDARY)
+        encoded, content_type = encode_multipart_formdata(fields, boundary=BOUNDARY)
+        expected = (b'--' + b(BOUNDARY) + b'\r\n'
+                    b'Content-Type: image/jpeg\r\n'
+                    b'\r\n'
+                    b'v\r\n'
+                    b'--' + b(BOUNDARY) + b'--\r\n')
 
-        self.assertEqual(encoded,
-            b'--' + b(BOUNDARY) + b'\r\n'
-            b'Content-Type: image/jpeg\r\n'
-            b'\r\n'
-            b'v\r\n'
-            b'--' + b(BOUNDARY) + b'--\r\n'
-            )
+        self.assertEqual(encoded, expected)
diff -Nru python-urllib3-1.19.1/test/test_no_ssl.py python-urllib3-1.21.1/test/test_no_ssl.py
--- python-urllib3-1.19.1/test/test_no_ssl.py 2015-10-27 03:22:40.000000000 +0000
+++ python-urllib3-1.21.1/test/test_no_ssl.py 2017-04-25 11:10:19.000000000 +0000
@@ -6,7 +6,10 @@
 """
 
 import sys
-import unittest
+if sys.version_info >= (2, 7):
+    import unittest
+else:
+    import unittest2 as unittest
 
 
 class ImportBlocker(object):
@@ -81,9 +84,9 @@
         # importlib.
         # 'import' inside 'lambda' is invalid syntax.
         def import_ssl():
-            import ssl
+            import ssl  # noqa: F401
 
         self.assertRaises(ImportError, import_ssl)
 
     def test_import_urllib3(self):
-        import urllib3
+        import urllib3  # noqa: F401
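The large test_poolmanager.py rewrite below tracks the 1.21 pool-key rework: the separate HTTPPoolKey/HTTPSPoolKey namedtuples are gone, a single PoolKey is derived from the whole request context, and the per-scheme key functions in key_fn_by_scheme are replaceable. A sketch of the mechanism the tests exercise (the tuple fields here are illustrative):

    from urllib3.poolmanager import PoolManager

    p = PoolManager(10)
    # Each scheme maps to a callable that turns a request context dict into a
    # hashable key; pools are shared exactly when their keys compare equal.
    p.key_fn_by_scheme['http'] = lambda ctx: (ctx['scheme'], ctx['host'], ctx['port'])
    pool_a = p.connection_from_url('http://example.com')
    pool_b = p.connection_from_url('http://example.com')
    assert pool_a is pool_b    # identical context -> identical key -> same pool
    p.clear()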
diff -Nru python-urllib3-1.19.1/test/test_poolmanager.py python-urllib3-1.21.1/test/test_poolmanager.py
--- python-urllib3-1.19.1/test/test_poolmanager.py 2016-09-17 14:47:25.000000000 +0000
+++ python-urllib3-1.21.1/test/test_poolmanager.py 2017-05-02 10:56:31.000000000 +0000
@@ -1,14 +1,10 @@
-import functools
-import unittest
-from collections import namedtuple
+import socket
+import sys
 
 from urllib3.poolmanager import (
-    _default_key_normalizer,
-    HTTPPoolKey,
-    HTTPSPoolKey,
+    PoolKey,
     key_fn_by_scheme,
     PoolManager,
-    SSL_KEYWORDS,
 )
 from urllib3 import connection_from_url
 from urllib3.exceptions import (
@@ -17,17 +13,25 @@
 )
 from urllib3.util import retry, timeout
 
+if sys.version_info >= (2, 7):
+    import unittest
+else:
+    import unittest2 as unittest
+
 
 class TestPoolManager(unittest.TestCase):
     def test_same_url(self):
         # Convince ourselves that normally we don't get the same object
         conn1 = connection_from_url('http://localhost:8081/foo')
         conn2 = connection_from_url('http://localhost:8081/bar')
+        self.addCleanup(conn1.close)
+        self.addCleanup(conn2.close)
 
         self.assertNotEqual(conn1, conn2)
 
         # Now try again using the PoolManager
         p = PoolManager(1)
+        self.addCleanup(p.clear)
 
         conn1 = p.connection_from_url('http://localhost:8081/foo')
         conn2 = p.connection_from_url('http://localhost:8081/bar')
@@ -49,6 +53,7 @@
         connections = set()
 
         p = PoolManager(10)
+        self.addCleanup(p.clear)
 
         for url in urls:
             conn = p.connection_from_url(url)
@@ -58,6 +63,7 @@
 
     def test_manager_clear(self):
         p = PoolManager(5)
+        self.addCleanup(p.clear)
 
         conn_pool = p.connection_from_url('http://google.com')
         self.assertEqual(len(p.pools), 1)
@@ -75,9 +81,9 @@
 
         self.assertEqual(len(p.pools), 0)
 
-
     def test_nohost(self):
         p = PoolManager(5)
+        self.addCleanup(p.clear)
         self.assertRaises(LocationValueError, p.connection_from_url, 'http://@')
         self.assertRaises(LocationValueError, p.connection_from_url, None)
@@ -107,6 +113,7 @@
             'source_address': '127.0.0.1',
         }
         p = PoolManager()
+        self.addCleanup(p.clear)
         conn_pools = [
             p.connection_from_url('http://example.com/'),
             p.connection_from_url('http://example.com:8000/'),
@@ -127,29 +134,10 @@
         )
         self.assertTrue(
             all(
-                isinstance(key, HTTPPoolKey)
+                isinstance(key, PoolKey)
                 for key in p.pools.keys())
         )
 
-    def test_http_pool_key_extra_kwargs(self):
-        """Assert non-HTTPPoolKey fields are ignored when selecting a pool."""
-        p = PoolManager()
-        conn_pool = p.connection_from_url('http://example.com/')
-        p.connection_pool_kw['some_kwarg'] = 'that should be ignored'
-        other_conn_pool = p.connection_from_url('http://example.com/')
-
-        self.assertTrue(conn_pool is other_conn_pool)
-
-    def test_http_pool_key_https_kwargs(self):
-        """Assert HTTPSPoolKey fields are ignored when selecting a HTTP pool."""
-        p = PoolManager()
-        conn_pool = p.connection_from_url('http://example.com/')
-        for key in SSL_KEYWORDS:
-            p.connection_pool_kw[key] = 'this should be ignored'
-        other_conn_pool = p.connection_from_url('http://example.com/')
-
-        self.assertTrue(conn_pool is other_conn_pool)
-
     def test_https_pool_key_fields(self):
         """Assert the HTTPSPoolKey fields are honored when selecting a pool."""
         connection_pool_kw = {
@@ -165,6 +153,7 @@
             'ssl_version': 'SSLv23_METHOD',
         }
         p = PoolManager()
+        self.addCleanup(p.clear)
         conn_pools = [
             p.connection_from_url('https://example.com/'),
             p.connection_from_url('https://example.com:4333/'),
@@ -190,22 +179,14 @@
         self.assertTrue(all(pool in conn_pools for pool in dup_pools))
         self.assertTrue(
             all(
-                isinstance(key, HTTPSPoolKey)
+                isinstance(key, PoolKey)
                 for key in p.pools.keys())
         )
 
-    def test_https_pool_key_extra_kwargs(self):
-        """Assert non-HTTPSPoolKey fields are ignored when selecting a pool."""
-        p = PoolManager()
-        conn_pool = p.connection_from_url('https://example.com/')
-        p.connection_pool_kw['some_kwarg'] = 'that should be ignored'
-        other_conn_pool = p.connection_from_url('https://example.com/')
-
-        self.assertTrue(conn_pool is other_conn_pool)
-
     def test_default_pool_key_funcs_copy(self):
         """Assert each PoolManager gets a copy of ``pool_keys_by_scheme``."""
         p = PoolManager()
+        self.addCleanup(p.clear)
         self.assertEqual(p.key_fn_by_scheme, p.key_fn_by_scheme)
         self.assertFalse(p.key_fn_by_scheme is key_fn_by_scheme)
@@ -219,6 +200,7 @@
             'ssl_version': 'SSLv23_METHOD',
         }
         p = PoolManager(5, **ssl_kw)
+        self.addCleanup(p.clear)
         conns = []
         conns.append(
             p.connection_from_host('example.com', 443, scheme='https')
         )
@@ -242,26 +224,29 @@
 
     def test_https_connection_from_url_case_insensitive(self):
         """Assert scheme case is ignored when pooling HTTPS connections."""
         p = PoolManager()
+        self.addCleanup(p.clear)
         pool = p.connection_from_url('https://example.com/')
         other_pool = p.connection_from_url('HTTPS://EXAMPLE.COM/')
 
         self.assertEqual(1, len(p.pools))
         self.assertTrue(pool is other_pool)
-        self.assertTrue(all(isinstance(key, HTTPSPoolKey) for key in p.pools.keys()))
+        self.assertTrue(all(isinstance(key, PoolKey) for key in p.pools.keys()))
 
     def test_https_connection_from_host_case_insensitive(self):
         """Assert scheme case is ignored when getting the https key class."""
         p = PoolManager()
+        self.addCleanup(p.clear)
         pool = p.connection_from_host('example.com', scheme='https')
         other_pool = p.connection_from_host('EXAMPLE.COM', scheme='HTTPS')
 
         self.assertEqual(1, len(p.pools))
         self.assertTrue(pool is other_pool)
-        self.assertTrue(all(isinstance(key, HTTPSPoolKey) for key in p.pools.keys()))
+        self.assertTrue(all(isinstance(key, PoolKey) for key in p.pools.keys()))
 
     def test_https_connection_from_context_case_insensitive(self):
         """Assert scheme case is ignored when getting the https key class."""
         p = PoolManager()
+        self.addCleanup(p.clear)
         context = {'scheme': 'https', 'host': 'example.com', 'port': '443'}
         other_context = {'scheme': 'HTTPS', 'host': 'EXAMPLE.COM', 'port': '443'}
         pool = p.connection_from_context(context)
@@ -269,7 +254,7 @@
 
         self.assertEqual(1, len(p.pools))
         self.assertTrue(pool is other_pool)
-        self.assertTrue(all(isinstance(key, HTTPSPoolKey) for key in p.pools.keys()))
+        self.assertTrue(all(isinstance(key, PoolKey) for key in p.pools.keys()))
 
     def test_http_connection_from_url_case_insensitive(self):
         """Assert scheme case is ignored when pooling HTTP connections."""
@@ -279,21 +264,33 @@
 
         self.assertEqual(1, len(p.pools))
         self.assertTrue(pool is other_pool)
-        self.assertTrue(all(isinstance(key, HTTPPoolKey) for key in p.pools.keys()))
+        self.assertTrue(all(isinstance(key, PoolKey) for key in p.pools.keys()))
 
     def test_http_connection_from_host_case_insensitive(self):
         """Assert scheme case is ignored when getting the https key class."""
         p = PoolManager()
+        self.addCleanup(p.clear)
         pool = p.connection_from_host('example.com', scheme='http')
         other_pool = p.connection_from_host('EXAMPLE.COM', scheme='HTTP')
 
         self.assertEqual(1, len(p.pools))
         self.assertTrue(pool is other_pool)
-        self.assertTrue(all(isinstance(key, HTTPPoolKey) for key in p.pools.keys()))
+        self.assertTrue(all(isinstance(key, PoolKey) for key in p.pools.keys()))
+
+    def test_assert_hostname_and_fingerprint_flag(self):
+        """Assert that pool manager can accept hostname and fingerprint flags."""
+        fingerprint = '92:81:FE:85:F7:0C:26:60:EC:D6:B3:BF:93:CF:F9:71:CC:07:7D:0A'
+        p = PoolManager(assert_hostname=True, assert_fingerprint=fingerprint)
+        self.addCleanup(p.clear)
+        pool = p.connection_from_url('https://example.com/')
+        self.assertEqual(1, len(p.pools))
+        self.assertTrue(pool.assert_hostname)
+        self.assertEqual(fingerprint, pool.assert_fingerprint)
 
     def test_http_connection_from_context_case_insensitive(self):
         """Assert scheme case is ignored when getting the https key class."""
         p = PoolManager()
+        self.addCleanup(p.clear)
         context = {'scheme': 'http', 'host': 'example.com', 'port': '8080'}
         other_context = {'scheme': 'HTTP', 'host': 'EXAMPLE.COM', 'port': '8080'}
         pool = p.connection_from_context(context)
@@ -301,19 +298,103 @@
 
         self.assertEqual(1, len(p.pools))
         self.assertTrue(pool is other_pool)
-        self.assertTrue(all(isinstance(key, HTTPPoolKey) for key in p.pools.keys()))
+        self.assertTrue(all(isinstance(key, PoolKey) for key in p.pools.keys()))
 
     def test_custom_pool_key(self):
-        """Assert it is possible to define addition pool key fields."""
-        custom_key = namedtuple('CustomKey', HTTPPoolKey._fields + ('my_field',))
-        p = PoolManager(10, my_field='barley')
-
-        p.key_fn_by_scheme['http'] = functools.partial(_default_key_normalizer, custom_key)
-        p.connection_from_url('http://example.com')
-        p.connection_pool_kw['my_field'] = 'wheat'
-        p.connection_from_url('http://example.com')
+        """Assert it is possible to define a custom key function."""
+        p = PoolManager(10)
+        self.addCleanup(p.clear)
+
+        p.key_fn_by_scheme['http'] = lambda x: tuple(x['key'])
+        pool1 = p.connection_from_url(
+            'http://example.com', pool_kwargs={'key': 'value'})
+        pool2 = p.connection_from_url(
+            'http://example.com', pool_kwargs={'key': 'other'})
+        pool3 = p.connection_from_url(
+            'http://example.com', pool_kwargs={'key': 'value', 'x': 'y'})
 
         self.assertEqual(2, len(p.pools))
+        self.assertTrue(pool1 is pool3)
+        self.assertFalse(pool1 is pool2)
+
connection_from_url.""" + p = PoolManager(strict=True) + pool_kwargs = {'strict': False, 'retries': 100, 'block': True} + + default_pool = p.connection_from_url('http://example.com/') + override_pool = p.connection_from_url( + 'http://example.com/', pool_kwargs=pool_kwargs) + + self.assertTrue(default_pool.strict) + self.assertEqual(retry.Retry.DEFAULT, default_pool.retries) + self.assertFalse(default_pool.block) + + self.assertFalse(override_pool.strict) + self.assertEqual(100, override_pool.retries) + self.assertTrue(override_pool.block) + + def test_override_pool_kwargs_host(self): + """Assert overriding pool kwargs works with connection_from_host""" + p = PoolManager(strict=True) + pool_kwargs = {'strict': False, 'retries': 100, 'block': True} + + default_pool = p.connection_from_host('example.com', scheme='http') + override_pool = p.connection_from_host('example.com', scheme='http', + pool_kwargs=pool_kwargs) + + self.assertTrue(default_pool.strict) + self.assertEqual(retry.Retry.DEFAULT, default_pool.retries) + self.assertFalse(default_pool.block) + + self.assertFalse(override_pool.strict) + self.assertEqual(100, override_pool.retries) + self.assertTrue(override_pool.block) + + def test_pool_kwargs_socket_options(self): + """Assert passing socket options works with connection_from_host""" + p = PoolManager(socket_options=[]) + override_opts = [ + (socket.SOL_SOCKET, socket.SO_REUSEADDR, 1), + (socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) + ] + pool_kwargs = {'socket_options': override_opts} + + default_pool = p.connection_from_host('example.com', scheme='http') + override_pool = p.connection_from_host( + 'example.com', scheme='http', pool_kwargs=pool_kwargs + ) + + self.assertEqual(default_pool.conn_kw['socket_options'], []) + self.assertEqual( + override_pool.conn_kw['socket_options'], override_opts + ) + + def test_merge_pool_kwargs(self): + """Assert _merge_pool_kwargs works in the happy case""" + p = PoolManager(strict=True) + merged = p._merge_pool_kwargs({'new_key': 'value'}) + self.assertEqual({'strict': True, 'new_key': 'value'}, merged) + + def test_merge_pool_kwargs_none(self): + """Assert false-y values to _merge_pool_kwargs result in defaults""" + p = PoolManager(strict=True) + merged = p._merge_pool_kwargs({}) + self.assertEqual(p.connection_pool_kw, merged) + merged = p._merge_pool_kwargs(None) + self.assertEqual(p.connection_pool_kw, merged) + + def test_merge_pool_kwargs_remove_key(self): + """Assert keys can be removed with _merge_pool_kwargs""" + p = PoolManager(strict=True) + merged = p._merge_pool_kwargs({'strict': None}) + self.assertTrue('strict' not in merged) + + def test_merge_pool_kwargs_invalid_key(self): + """Assert removing invalid keys with _merge_pool_kwargs doesn't break""" + p = PoolManager(strict=True) + merged = p._merge_pool_kwargs({'invalid_key': None}) + self.assertEqual(p.connection_pool_kw, merged) if __name__ == '__main__': diff -Nru python-urllib3-1.19.1/test/test_proxymanager.py python-urllib3-1.21.1/test/test_proxymanager.py --- python-urllib3-1.19.1/test/test_proxymanager.py 2015-10-27 03:22:40.000000000 +0000 +++ python-urllib3-1.21.1/test/test_proxymanager.py 2017-04-25 11:10:19.000000000 +0000 @@ -1,11 +1,17 @@ -import unittest +import sys from urllib3.poolmanager import ProxyManager +if sys.version_info >= (2, 7): + import unittest +else: + import unittest2 as unittest + class TestProxyManager(unittest.TestCase): def test_proxy_headers(self): p = ProxyManager('http://something:1234') + self.addCleanup(p.clear) url = 
diff -Nru python-urllib3-1.19.1/test/test_proxymanager.py python-urllib3-1.21.1/test/test_proxymanager.py
--- python-urllib3-1.19.1/test/test_proxymanager.py 2015-10-27 03:22:40.000000000 +0000
+++ python-urllib3-1.21.1/test/test_proxymanager.py 2017-04-25 11:10:19.000000000 +0000
@@ -1,11 +1,17 @@
-import unittest
+import sys
 
 from urllib3.poolmanager import ProxyManager
 
+if sys.version_info >= (2, 7):
+    import unittest
+else:
+    import unittest2 as unittest
+
 
 class TestProxyManager(unittest.TestCase):
     def test_proxy_headers(self):
         p = ProxyManager('http://something:1234')
+        self.addCleanup(p.clear)
         url = 'http://pypi.python.org/test'
 
         # Verify default headers
@@ -34,8 +40,10 @@
 
     def test_default_port(self):
         p = ProxyManager('http://something')
+        self.addCleanup(p.clear)
         self.assertEqual(p.proxy.port, 80)
         p = ProxyManager('https://something')
+        self.addCleanup(p.clear)
         self.assertEqual(p.proxy.port, 443)
 
     def test_invalid_scheme(self):
diff -Nru python-urllib3-1.19.1/test/test_queue_monkeypatch.py python-urllib3-1.21.1/test/test_queue_monkeypatch.py
--- python-urllib3-1.19.1/test/test_queue_monkeypatch.py 1970-01-01 00:00:00.000000000 +0000
+++ python-urllib3-1.21.1/test/test_queue_monkeypatch.py 2017-04-25 11:10:19.000000000 +0000
@@ -0,0 +1,37 @@
+from __future__ import absolute_import
+
+import mock
+import sys
+
+import urllib3
+from urllib3.exceptions import EmptyPoolError
+from urllib3.packages.six.moves import queue
+
+if sys.version_info >= (2, 7):
+    import unittest
+else:
+    import unittest2 as unittest
+
+
+class BadError(Exception):
+    """
+    This should not be raised.
+    """
+    pass
+
+
+class TestMonkeypatchResistance(unittest.TestCase):
+    """
+    Test that connection pool works even with a monkey patched Queue module,
+    see obspy/obspy#1599, kennethreitz/requests#3742, shazow/urllib3#1061.
+    """
+    def test_queue_monkeypatching(self):
+        with mock.patch.object(queue, 'Empty', BadError):
+            http = urllib3.HTTPConnectionPool(host="localhost", block=True)
+            self.addCleanup(http.close)
+            http._get_conn(timeout=1)
+            self.assertRaises(EmptyPoolError, http._get_conn, timeout=1)
+
+
+if __name__ == '__main__':
+    unittest.main()
diff -Nru python-urllib3-1.19.1/test/test_response.py python-urllib3-1.21.1/test/test_response.py
--- python-urllib3-1.19.1/test/test_response.py 2016-10-12 16:41:52.000000000 +0000
+++ python-urllib3-1.21.1/test/test_response.py 2017-05-02 09:08:45.000000000 +0000
@@ -1,5 +1,5 @@
-import unittest
 import socket
+import sys
 
 from io import BytesIO, BufferedReader
 
@@ -13,6 +13,11 @@
 
 from base64 import b64decode
 
+if sys.version_info >= (2, 7):
+    import unittest
+else:
+    import unittest2 as unittest
+
 # A known random (i.e, not-too-compressible) payload generated with:
 #    "".join(random.choice(string.printable) for i in xrange(512))
 #    .encode("zlib").encode("base64")
@@ -114,12 +119,15 @@
                          preload_content=False)
 
         self.assertEqual(r.read(3), b'')
+        # Buffer in case we need to switch to the raw stream
+        self.assertIsNotNone(r._decoder._data)
         self.assertEqual(r.read(1), b'f')
+        # Now that we've decoded data, we just stream through the decoder
+        self.assertIsNone(r._decoder._data)
         self.assertEqual(r.read(2), b'oo')
         self.assertEqual(r.read(), b'')
         self.assertEqual(r.read(), b'')
 
-
     def test_chunked_decoding_deflate2(self):
         import zlib
         compress = zlib.compressobj(6, zlib.DEFLATED, -zlib.MAX_WBITS)
@@ -132,11 +140,12 @@
 
         self.assertEqual(r.read(1), b'')
         self.assertEqual(r.read(1), b'f')
+        # Once we've decoded data, we just stream to the decoder; no buffering
+        self.assertIsNone(r._decoder._data)
         self.assertEqual(r.read(2), b'oo')
         self.assertEqual(r.read(), b'')
         self.assertEqual(r.read(), b'')
 
-
     def test_chunked_decoding_gzip(self):
         import zlib
         compress = zlib.compressobj(6, zlib.DEFLATED, 16 + zlib.MAX_WBITS)
@@ -153,7 +162,6 @@
         self.assertEqual(r.read(), b'')
         self.assertEqual(r.read(), b'')
 
-
     def test_body_blob(self):
         resp = HTTPResponse(b'foo')
         self.assertEqual(resp.data, b'foo')
@@ -179,7 +187,7 @@
         resp2.close()
         self.assertEqual(resp2.closed, True)
 
-        #also try when only data is present.
+        # also try when only data is present.
         resp3 = HTTPResponse('foodata')
         self.assertRaises(IOError, resp3.fileno)
@@ -233,7 +241,6 @@
         # versions. Probably this is because the `io` module in py2.6 is an
         # old version that has a different underlying implementation.
 
-
         fp = BytesIO(b'foo')
         resp = HTTPResponse(fp, preload_content=False)
@@ -279,7 +286,7 @@
         fp = BytesIO(data)
         resp = HTTPResponse(fp, headers={'content-encoding': 'gzip'},
-            preload_content=False)
+                            preload_content=False)
 
         stream = resp.stream(2)
         self.assertEqual(next(stream), b'f')
@@ -295,7 +302,7 @@
         fp = BytesIO(data)
         resp = HTTPResponse(fp, headers={'content-encoding': 'gzip'},
-            preload_content=False)
+                            preload_content=False)
         stream = resp.stream()
 
         # Read everything
@@ -364,7 +371,7 @@
         fp = BytesIO(data)
         resp = HTTPResponse(fp, headers={'content-encoding': 'deflate'},
-            preload_content=False)
+                            preload_content=False)
 
         stream = resp.stream(2)
         self.assertEqual(next(stream), b'f')
@@ -379,7 +386,7 @@
         fp = BytesIO(data)
         resp = HTTPResponse(fp, headers={'content-encoding': 'deflate'},
-            preload_content=False)
+                            preload_content=False)
 
         stream = resp.stream(2)
         self.assertEqual(next(stream), b'f')
@@ -702,7 +709,7 @@
     def _encode_chunk(self, chunk):
         return '%X\r\n%s%s' % (len(chunk), chunk.decode(),
-                "\r\n" if len(chunk) > 0 else "")
+                               "\r\n" if len(chunk) > 0 else "")
 
 
 class MockChunkedEncodingWithExtensions(MockChunkedEncodingResponse):
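The `r._decoder._data` assertions in the test_response.py hunks above pin down the streaming-deflate fix: a Content-Encoding: deflate body may arrive either zlib-wrapped or as a raw DEFLATE stream, so the decoder buffers input only until one format succeeds and then streams without further buffering. A self-contained sketch of that two-stage idea (not urllib3's actual class):

    import zlib

    def decode_deflate(chunks):
        """Decode a deflate body that may or may not have a zlib header."""
        obj = zlib.decompressobj()    # optimistic: assume a zlib wrapper
        buffered = b''                # kept only until the format is settled
        decided = False
        out = []
        for chunk in chunks:
            if not decided:
                buffered += chunk
            try:
                data = obj.decompress(chunk)
                if data and not decided:
                    decided = True    # produced output: the format is settled
                    buffered = b''    # stop buffering from here on
            except zlib.error:
                if decided:
                    raise             # real corruption, not a header mismatch
                obj = zlib.decompressobj(-zlib.MAX_WBITS)   # retry as raw DEFLATE
                data = obj.decompress(buffered)
                decided = True
                buffered = b''
            out.append(data)
        return b''.join(out)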
error.") except MaxRetryError as e: self.assertEqual(e.reason, error) + def test_status_counter(self): + resp = HTTPResponse(status=400) + retry = Retry(status=2) + retry = retry.increment(response=resp) + retry = retry.increment(response=resp) + try: + retry.increment(response=resp) + self.fail("Failed to raise error.") + except MaxRetryError as e: + self.assertEqual(str(e.reason), + ResponseError.SPECIFIC_ERROR.format(status_code=400)) + def test_backoff(self): """ Backoff is computed correctly """ max_backoff = Retry.BACKOFF_MAX retry = Retry(total=100, backoff_factor=0.2) - self.assertEqual(retry.get_backoff_time(), 0) # First request + self.assertEqual(retry.get_backoff_time(), 0) # First request - retry = retry.increment() - self.assertEqual(retry.get_backoff_time(), 0) # First retry + retry = retry.increment(method='GET') + self.assertEqual(retry.get_backoff_time(), 0) # First retry - retry = retry.increment() + retry = retry.increment(method='GET') self.assertEqual(retry.backoff_factor, 0.2) self.assertEqual(retry.total, 98) - self.assertEqual(retry.get_backoff_time(), 0.4) # Start backoff + self.assertEqual(retry.get_backoff_time(), 0.4) # Start backoff - retry = retry.increment() + retry = retry.increment(method='GET') self.assertEqual(retry.get_backoff_time(), 0.8) - retry = retry.increment() + retry = retry.increment(method='GET') self.assertEqual(retry.get_backoff_time(), 1.6) for i in xrange(10): - retry = retry.increment() + retry = retry.increment(method='GET') self.assertEqual(retry.get_backoff_time(), max_backoff) def test_zero_backoff(self): retry = Retry() self.assertEqual(retry.get_backoff_time(), 0) - retry = retry.increment() - retry = retry.increment() + retry = retry.increment(method='GET') + retry = retry.increment(method='GET') self.assertEqual(retry.get_backoff_time(), 0) def test_backoff_reset_after_redirect(self): retry = Retry(total=100, redirect=5, backoff_factor=0.2) - retry = retry.increment() - retry = retry.increment() + retry = retry.increment(method='GET') + retry = retry.increment(method='GET') self.assertEqual(retry.get_backoff_time(), 0.4) redirect_response = HTTPResponse(status=302, headers={'location': 'test'}) - retry = retry.increment(response=redirect_response) + retry = retry.increment(method='GET', response=redirect_response) self.assertEqual(retry.get_backoff_time(), 0) - retry = retry.increment() - retry = retry.increment() + retry = retry.increment(method='GET') + retry = retry.increment(method='GET') self.assertEqual(retry.get_backoff_time(), 0.4) def test_sleep(self): # sleep a very small amount of time so our code coverage is happy retry = Retry(backoff_factor=0.0001) - retry = retry.increment() - retry = retry.increment() + retry = retry.increment(method='GET') + retry = retry.increment(method='GET') retry.sleep() def test_status_forcelist(self): - retry = Retry(status_forcelist=xrange(500,600)) + retry = Retry(status_forcelist=xrange(500, 600)) self.assertFalse(retry.is_retry('GET', status_code=200)) self.assertFalse(retry.is_retry('GET', status_code=400)) self.assertTrue(retry.is_retry('GET', status_code=500)) @@ -178,16 +192,17 @@ def test_exhausted(self): self.assertFalse(Retry(0).is_exhausted()) self.assertTrue(Retry(-1).is_exhausted()) - self.assertEqual(Retry(1).increment().total, 0) + self.assertEqual(Retry(1).increment(method='GET').total, 0) def test_disabled(self): - self.assertRaises(MaxRetryError, Retry(-1).increment) - self.assertRaises(MaxRetryError, Retry(0).increment) + self.assertRaises(MaxRetryError, 
 
     def test_error_message(self):
         retry = Retry(total=0)
         try:
-            retry = retry.increment(error=ReadTimeoutError(None, "/", "read timed out"))
+            retry = retry.increment(method='GET',
+                                    error=ReadTimeoutError(None, "/", "read timed out"))
             raise AssertionError("Should have raised a MaxRetryError")
         except MaxRetryError as e:
             assert 'Caused by redirect' not in str(e)
@@ -225,17 +240,27 @@
         self.assertEqual(str(e.reason), 'conntimeout')
 
     def test_history(self):
-        retry = Retry(total=10)
+        retry = Retry(total=10, method_whitelist=frozenset(['GET', 'POST']))
         self.assertEqual(retry.history, tuple())
         connection_error = ConnectTimeoutError('conntimeout')
         retry = retry.increment('GET', '/test1', None, connection_error)
-        self.assertEqual(retry.history, (RequestHistory('GET', '/test1', connection_error, None, None),))
+        history = (RequestHistory('GET', '/test1', connection_error, None, None),)
+        self.assertEqual(retry.history, history)
+
         read_error = ReadTimeoutError(None, "/test2", "read timed out")
         retry = retry.increment('POST', '/test2', None, read_error)
-        self.assertEqual(retry.history, (RequestHistory('GET', '/test1', connection_error, None, None),
-                                         RequestHistory('POST', '/test2', read_error, None, None)))
+        history = (RequestHistory('GET', '/test1', connection_error, None, None),
+                   RequestHistory('POST', '/test2', read_error, None, None))
+        self.assertEqual(retry.history, history)
+
         response = HTTPResponse(status=500)
         retry = retry.increment('GET', '/test3', response, None)
-        self.assertEqual(retry.history, (RequestHistory('GET', '/test1', connection_error, None, None),
-                                         RequestHistory('POST', '/test2', read_error, None, None),
-                                         RequestHistory('GET', '/test3', None, 500, None)))
+        history = (RequestHistory('GET', '/test1', connection_error, None, None),
+                   RequestHistory('POST', '/test2', read_error, None, None),
+                   RequestHistory('GET', '/test3', None, 500, None))
+        self.assertEqual(retry.history, history)
+
+    def test_retry_method_not_in_whitelist(self):
+        error = ReadTimeoutError(None, "/", "read timed out")
+        retry = Retry()
+        self.assertRaises(ReadTimeoutError, retry.increment, method='POST', error=error)
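Together these test_retry.py changes show the new retry surface: increment() now takes the request method so timeouts on non-whitelisted methods propagate instead of being retried, and a dedicated status counter caps retries triggered by status_forcelist. Configured from application code, roughly:

    from urllib3 import PoolManager
    from urllib3.util.retry import Retry

    retries = Retry(
        total=10,
        status=2,                          # at most 2 retries on bad statuses
        status_forcelist=[500, 502, 503],  # statuses that force a retry
        method_whitelist=['GET', 'HEAD'],  # only retry idempotent methods
        backoff_factor=0.2,
    )
    http = PoolManager(retries=retries)
    # http.request('GET', 'http://example.com/')  # uses the policy above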
diff -Nru python-urllib3-1.19.1/test/test_selectors.py python-urllib3-1.21.1/test/test_selectors.py
--- python-urllib3-1.19.1/test/test_selectors.py 1970-01-01 00:00:00.000000000 +0000
+++ python-urllib3-1.21.1/test/test_selectors.py 2017-05-02 09:08:45.000000000 +0000
@@ -0,0 +1,795 @@
+from __future__ import with_statement
+import errno
+import os
+import psutil
+import select
+import signal
+import sys
+import time
+import threading
+
+try:  # Python 2.6 unittest module doesn't have skip decorators.
+    from unittest import skipIf, skipUnless
+    import unittest
+except ImportError:
+    from unittest2 import skipIf, skipUnless
+    import unittest2 as unittest
+
+try:  # Python 2.x doesn't define time.perf_counter.
+    from time import perf_counter as get_time
+except ImportError:
+    from time import time as get_time
+
+try:  # Python 2.6 doesn't have the resource module.
+    import resource
+except ImportError:
+    resource = None
+
+try:  # Windows doesn't support socketpair on Python 3.5<
+    from socket import socketpair
+except ImportError:
+    from .socketpair_helper import socketpair
+
+from urllib3.util import (
+    selectors,
+    wait
+)
+
+HAS_ALARM = hasattr(signal, "alarm")
+
+LONG_SELECT = 0.2
+SHORT_SELECT = 0.01
+
+# Tolerance values for timer/speed fluctuations.
+TOLERANCE = 0.75
+
+# Detect whether we're running on Travis or AppVeyor. This
+# is used to skip some verification points inside of tests to
+# not randomly fail our CI due to wild timer/speed differences.
+TRAVIS_CI = "TRAVIS" in os.environ
+APPVEYOR = "APPVEYOR" in os.environ
+
+
+skipUnlessHasSelector = skipUnless(selectors.HAS_SELECT, "Platform doesn't have a selector")
+skipUnlessHasENOSYS = skipUnless(hasattr(errno, 'ENOSYS'), "Platform doesn't have errno.ENOSYS")
+skipUnlessHasAlarm = skipUnless(hasattr(signal, 'alarm'), "Platform doesn't have signal.alarm()")
+
+
+def patch_select_module(testcase, *keep, **replace):
+    """ Helper function that removes all selectors from the select module
+    except those listed in *keep and **replace. Those in keep will be kept
+    if they exist in the select module and those in replace will be patched
+    with the value that is given regardless if they exist or not. Cleanup
+    will restore previous state. This helper also resets the selectors module
+    so that a call to DefaultSelector() will do feature detection again. """
+    selectors._DEFAULT_SELECTOR = None
+    for s in ['select', 'poll', 'epoll', 'kqueue']:
+        if s in replace:
+            if hasattr(select, s):
+                old_selector = getattr(select, s)
+                testcase.addCleanup(setattr, select, s, old_selector)
+            else:
+                testcase.addCleanup(delattr, select, s)
+            setattr(select, s, replace[s])
+        elif s not in keep and hasattr(select, s):
+            old_selector = getattr(select, s)
+            testcase.addCleanup(setattr, select, s, old_selector)
+            delattr(select, s)
+
+
+class AlarmThread(threading.Thread):
+    def __init__(self, timeout):
+        super(AlarmThread, self).__init__(group=None)
+        self.setDaemon(True)
+        self.timeout = timeout
+        self.canceled = False
+
+    def cancel(self):
+        self.canceled = True
+
+    def run(self):
+        time.sleep(self.timeout)
+        if not self.canceled:
+            os.kill(os.getpid(), signal.SIGALRM)
+
+
+class AlarmMixin(object):
+    alarm_thread = None
+
+    def _begin_alarm_thread(self, timeout):
+        self.addCleanup(self._cancel_alarm_thread)
+        self.alarm_thread = AlarmThread(timeout)
+        self.alarm_thread.start()
+
+    def _cancel_alarm_thread(self):
+        if self.alarm_thread is not None:
+            self.alarm_thread.cancel()
+            self.alarm_thread.join(0.0)
+        self.alarm_thread = None
+
+    def set_alarm(self, duration, handler):
+        sigalrm_handler = signal.signal(signal.SIGALRM, handler)
+        self.addCleanup(signal.signal, signal.SIGALRM, sigalrm_handler)
+        self._begin_alarm_thread(duration)
+
+
+class TimerContext(object):
+    def __init__(self, testcase, lower=None, upper=None):
+        self.testcase = testcase
+        self.lower = lower
+        self.upper = upper
+        self.start_time = None
+        self.end_time = None
+
+    def __enter__(self):
+        self.start_time = get_time()
+
+    def __exit__(self, *args, **kwargs):
+        self.end_time = get_time()
+        total_time = self.end_time - self.start_time
+
+        # Skip timing on CI due to flakiness.
+        if TRAVIS_CI or APPVEYOR:
+            return
+
+        if self.lower is not None:
+            self.testcase.assertGreaterEqual(total_time, self.lower * (1.0 - TOLERANCE))
+        if self.upper is not None:
+            self.testcase.assertLessEqual(total_time, self.upper * (1.0 + TOLERANCE))
+
+
+class TimerMixin(object):
+    def assertTakesTime(self, lower=None, upper=None):
+        return TimerContext(self, lower=lower, upper=upper)
+
+
+@skipUnlessHasSelector
+class BaseSelectorTestCase(unittest.TestCase, AlarmMixin, TimerMixin):
+    """ Implements the tests that each type of selector must pass.
""" + + def make_socketpair(self): + rd, wr = socketpair() + + # Make non-blocking so we get errors if the + # sockets are interacted with but not ready. + rd.settimeout(0.0) + wr.settimeout(0.0) + + self.addCleanup(rd.close) + self.addCleanup(wr.close) + return rd, wr + + def make_selector(self): + s = selectors.DefaultSelector() + self.addCleanup(s.close) + return s + + def standard_setup(self): + s = self.make_selector() + rd, wr = self.make_socketpair() + s.register(rd, selectors.EVENT_READ) + s.register(wr, selectors.EVENT_WRITE) + return s, rd, wr + + def test_get_key(self): + s = self.make_selector() + rd, wr = self.make_socketpair() + + key = s.register(rd, selectors.EVENT_READ, "data") + self.assertEqual(key, s.get_key(rd)) + + # Unknown fileobj + self.assertRaises(KeyError, s.get_key, 999999) + + def test_get_map(self): + s = self.make_selector() + rd, wr = self.make_socketpair() + + keys = s.get_map() + self.assertFalse(keys) + self.assertEqual(len(keys), 0) + self.assertEqual(list(keys), []) + key = s.register(rd, selectors.EVENT_READ, "data") + self.assertIn(rd, keys) + self.assertEqual(key, keys[rd]) + self.assertEqual(len(keys), 1) + self.assertEqual(list(keys), [rd.fileno()]) + self.assertEqual(list(keys.values()), [key]) + + # Unknown fileobj + self.assertRaises(KeyError, keys.__getitem__, 999999) + + # Read-only mapping + with self.assertRaises(TypeError): + del keys[rd] + + # Doesn't define __setitem__ + with self.assertRaises(TypeError): + keys[rd] = key + + def test_register(self): + s = self.make_selector() + rd, wr = self.make_socketpair() + + # Ensure that the file is not yet added. + self.assertEqual(0, len(s.get_map())) + self.assertRaises(KeyError, lambda: s.get_map()[rd.fileno()]) + self.assertRaises(KeyError, s.get_key, rd) + self.assertEqual(None, s._key_from_fd(rd.fileno())) + + data = object() + key = s.register(rd, selectors.EVENT_READ, data) + self.assertIsInstance(key, selectors.SelectorKey) + self.assertEqual(key.fileobj, rd) + self.assertEqual(key.fd, rd.fileno()) + self.assertEqual(key.events, selectors.EVENT_READ) + self.assertIs(key.data, data) + self.assertEqual(1, len(s.get_map())) + for fd in s.get_map(): + self.assertEqual(fd, rd.fileno()) + + def test_register_bad_event(self): + s = self.make_selector() + rd, wr = self.make_socketpair() + + self.assertRaises(ValueError, s.register, rd, 99999) + + def test_register_negative_fd(self): + s = self.make_selector() + self.assertRaises(ValueError, s.register, -1, selectors.EVENT_READ) + + def test_register_invalid_fileobj(self): + s = self.make_selector() + self.assertRaises(ValueError, s.register, "string", selectors.EVENT_READ) + + def test_reregister_fd_same_fileobj(self): + s, rd, wr = self.standard_setup() + self.assertRaises(KeyError, s.register, rd, selectors.EVENT_READ) + + def test_reregister_fd_different_fileobj(self): + s, rd, wr = self.standard_setup() + self.assertRaises(KeyError, s.register, rd.fileno(), selectors.EVENT_READ) + + def test_context_manager(self): + s = self.make_selector() + rd, wr = self.make_socketpair() + + with s as sel: + rd_key = sel.register(rd, selectors.EVENT_READ) + wr_key = sel.register(wr, selectors.EVENT_WRITE) + self.assertEqual(rd_key, sel.get_key(rd)) + self.assertEqual(wr_key, sel.get_key(wr)) + + self.assertRaises(RuntimeError, s.get_key, rd) + self.assertRaises(RuntimeError, s.get_key, wr) + + def test_unregister(self): + s, rd, wr = self.standard_setup() + s.unregister(rd) + + self.assertRaises(KeyError, s.unregister, 99999) + + def 
+    def test_reunregister(self):
+        s, rd, wr = self.standard_setup()
+        s.unregister(rd)
+
+        self.assertRaises(KeyError, s.unregister, rd)
+
+    def test_unregister_after_fd_close(self):
+        s = self.make_selector()
+        rd, wr = self.make_socketpair()
+        rdfd = rd.fileno()
+        wrfd = wr.fileno()
+        s.register(rdfd, selectors.EVENT_READ)
+        s.register(wrfd, selectors.EVENT_WRITE)
+
+        rd.close()
+        wr.close()
+
+        s.unregister(rdfd)
+        s.unregister(wrfd)
+
+        self.assertEqual(0, len(s.get_map()))
+
+    def test_unregister_after_fileobj_close(self):
+        s = self.make_selector()
+        rd, wr = self.make_socketpair()
+        s.register(rd, selectors.EVENT_READ)
+        s.register(wr, selectors.EVENT_WRITE)
+
+        rd.close()
+        wr.close()
+
+        s.unregister(rd)
+        s.unregister(wr)
+
+        self.assertEqual(0, len(s.get_map()))
+
+    @skipUnless(os.name == "posix", "Platform doesn't support os.dup2")
+    def test_unregister_after_reuse_fd(self):
+        s, rd, wr = self.standard_setup()
+        rdfd = rd.fileno()
+        wrfd = wr.fileno()
+
+        rd2, wr2 = self.make_socketpair()
+        rd.close()
+        wr.close()
+        os.dup2(rd2.fileno(), rdfd)
+        os.dup2(wr2.fileno(), wrfd)
+
+        s.unregister(rdfd)
+        s.unregister(wrfd)
+
+        self.assertEqual(0, len(s.get_map()))
+
+    def test_modify(self):
+        s = self.make_selector()
+        rd, wr = self.make_socketpair()
+
+        key = s.register(rd, selectors.EVENT_READ)
+
+        # Modify events
+        key2 = s.modify(rd, selectors.EVENT_WRITE)
+        self.assertNotEqual(key.events, key2.events)
+        self.assertEqual(key2, s.get_key(rd))
+
+        s.unregister(rd)
+
+        # Modify data
+        d1 = object()
+        d2 = object()
+
+        key = s.register(rd, selectors.EVENT_READ, d1)
+        key2 = s.modify(rd, selectors.EVENT_READ, d2)
+        self.assertEqual(key.events, key2.events)
+        self.assertIsNot(key.data, key2.data)
+        self.assertEqual(key2, s.get_key(rd))
+        self.assertIs(key2.data, d2)
+
+        # Modify invalid fileobj
+        self.assertRaises(KeyError, s.modify, 999999, selectors.EVENT_READ)
+
+    def test_empty_select(self):
+        s = self.make_selector()
+        self.assertEqual([], s.select(timeout=SHORT_SELECT))
+
+    def test_select_multiple_event_types(self):
+        s = self.make_selector()
+
+        rd, wr = self.make_socketpair()
+        key = s.register(rd, selectors.EVENT_READ | selectors.EVENT_WRITE)
+
+        self.assertEqual([(key, selectors.EVENT_WRITE)], s.select(0.001))
+
+        wr.send(b'x')
+        time.sleep(0.01)  # Wait for the write to flush.
+
+        self.assertEqual([(key, selectors.EVENT_READ | selectors.EVENT_WRITE)], s.select(0.001))
+
+    def test_select_multiple_selectors(self):
+        s1 = self.make_selector()
+        s2 = self.make_selector()
+        rd, wr = self.make_socketpair()
+        key1 = s1.register(rd, selectors.EVENT_READ)
+        key2 = s2.register(rd, selectors.EVENT_READ)
+
+        wr.send(b'x')
+        time.sleep(0.01)  # Wait for the write to flush.
+
+        self.assertEqual([(key1, selectors.EVENT_READ)], s1.select(timeout=0.001))
+        self.assertEqual([(key2, selectors.EVENT_READ)], s2.select(timeout=0.001))
+
+    def test_select_no_event_types(self):
+        s = self.make_selector()
+        rd, wr = self.make_socketpair()
+        self.assertRaises(ValueError, s.register, rd, 0)
+
+    def test_select_many_events(self):
+        s = self.make_selector()
+        readers = []
+        writers = []
+        for _ in range(32):
+            rd, wr = self.make_socketpair()
+            readers.append(rd)
+            writers.append(wr)
+            s.register(rd, selectors.EVENT_READ)
+
+        self.assertEqual(0, len(s.select(0.001)))
+
+        # Write a byte to each end.
+        for wr in writers:
+            wr.send(b'x')
+
+        # Give time to flush the writes.
+        time.sleep(0.01)
+
+        ready = s.select(0.001)
+        self.assertEqual(32, len(ready))
+        for key, events in ready:
+            self.assertEqual(selectors.EVENT_READ, events)
+            self.assertIn(key.fileobj, readers)
+
+        # Now read the byte from each endpoint.
+        for rd in readers:
+            data = rd.recv(1)
+            self.assertEqual(b'x', data)
+
+        self.assertEqual(0, len(s.select(0.001)))
+
+    def test_select_timeout_none(self):
+        s = self.make_selector()
+        rd, wr = self.make_socketpair()
+        s.register(wr, selectors.EVENT_WRITE)
+
+        with self.assertTakesTime(upper=SHORT_SELECT):
+            self.assertEqual(1, len(s.select(timeout=None)))
+
+    def test_select_timeout_ready(self):
+        s, rd, wr = self.standard_setup()
+
+        with self.assertTakesTime(upper=SHORT_SELECT):
+            self.assertEqual(1, len(s.select(timeout=0)))
+            self.assertEqual(1, len(s.select(timeout=-1)))
+            self.assertEqual(1, len(s.select(timeout=0.001)))
+
+    def test_select_timeout_not_ready(self):
+        s = self.make_selector()
+        rd, wr = self.make_socketpair()
+        s.register(rd, selectors.EVENT_READ)
+
+        with self.assertTakesTime(upper=SHORT_SELECT):
+            self.assertEqual(0, len(s.select(timeout=0)))
+
+        with self.assertTakesTime(lower=SHORT_SELECT, upper=SHORT_SELECT):
+            self.assertEqual(0, len(s.select(timeout=SHORT_SELECT)))
+
+    @skipUnlessHasAlarm
+    def test_select_timing(self):
+        s = self.make_selector()
+        rd, wr = self.make_socketpair()
+        key = s.register(rd, selectors.EVENT_READ)
+
+        self.set_alarm(SHORT_SELECT, lambda *args: wr.send(b'x'))
+
+        with self.assertTakesTime(upper=SHORT_SELECT):
+            ready = s.select(LONG_SELECT)
+        self.assertEqual([(key, selectors.EVENT_READ)], ready)
+
+    @skipUnlessHasAlarm
+    def test_select_interrupt_no_event(self):
+        s = self.make_selector()
+        rd, wr = self.make_socketpair()
+        s.register(rd, selectors.EVENT_READ)
+
+        self.set_alarm(SHORT_SELECT, lambda *args: None)
+
+        with self.assertTakesTime(lower=LONG_SELECT, upper=LONG_SELECT):
+            self.assertEqual([], s.select(LONG_SELECT))
+
+    @skipUnlessHasAlarm
+    def test_select_interrupt_with_event(self):
+        s = self.make_selector()
+        rd, wr = self.make_socketpair()
+        s.register(rd, selectors.EVENT_READ)
+        key = s.get_key(rd)
+
+        self.set_alarm(SHORT_SELECT, lambda *args: wr.send(b'x'))
+
+        with self.assertTakesTime(lower=SHORT_SELECT, upper=SHORT_SELECT):
+            self.assertEqual([(key, selectors.EVENT_READ)], s.select(LONG_SELECT))
+        self.assertEqual(rd.recv(1), b'x')
+
+    @skipUnlessHasAlarm
+    def test_select_multiple_interrupts_with_event(self):
+        s = self.make_selector()
+        rd, wr = self.make_socketpair()
+        s.register(rd, selectors.EVENT_READ)
+        key = s.get_key(rd)
+
+        def second_alarm(*args):
+            wr.send(b'x')
+
+        def first_alarm(*args):
+            self._begin_alarm_thread(SHORT_SELECT)
+            signal.signal(signal.SIGALRM, second_alarm)
+
+        self.set_alarm(SHORT_SELECT, first_alarm)
+
+        with self.assertTakesTime(lower=SHORT_SELECT * 2, upper=SHORT_SELECT * 2):
+            self.assertEqual([(key, selectors.EVENT_READ)], s.select(LONG_SELECT))
+        self.assertEqual(rd.recv(1), b'x')
+
+    @skipUnlessHasAlarm
+    def test_selector_error(self):
+        s = self.make_selector()
+        rd, wr = self.make_socketpair()
+        s.register(rd, selectors.EVENT_READ)
+
+        def alarm_exception(*args):
+            err = OSError()
+            err.errno = errno.EACCES
+            raise err
+
+        self.set_alarm(SHORT_SELECT, alarm_exception)
+
+        try:
+            s.select(LONG_SELECT)
+        except selectors.SelectorError as e:
+            self.assertEqual(e.errno, errno.EACCES)
+        except Exception as e:
+            self.fail("Raised incorrect exception: " + str(e))
+        else:
+            self.fail("select() didn't raise SelectorError")
+
+    # Test ensures that _syscall_wrapper properly raises the
+    # exception that is raised from an interrupt handler.
+    @skipUnlessHasAlarm
+    def test_select_interrupt_exception(self):
+        s = self.make_selector()
+        rd, wr = self.make_socketpair()
+        s.register(rd, selectors.EVENT_READ)
+
+        class AlarmInterrupt(Exception):
+            pass
+
+        def alarm_exception(*args):
+            raise AlarmInterrupt()
+
+        self.set_alarm(SHORT_SELECT, alarm_exception)
+
+        with self.assertTakesTime(lower=SHORT_SELECT, upper=SHORT_SELECT):
+            self.assertRaises(AlarmInterrupt, s.select, LONG_SELECT)
+
+    def test_fileno(self):
+        s = self.make_selector()
+        if hasattr(s, "fileno"):
+            fd = s.fileno()
+            self.assertTrue(isinstance(fd, int))
+            self.assertGreaterEqual(fd, 0)
+        else:
+            self.skipTest("Selector doesn't implement fileno()")
+
+    # According to the psutil docs, open_files() has strange behavior
+    # on Windows including giving back incorrect results so to
+    # stop random failures from occurring we're skipping on Windows.
+    @skipIf(sys.platform == "win32", "psutil.Process.open_files() is unstable on Windows.")
+    def test_leaking_fds(self):
+        proc = psutil.Process()
+        before_fds = len(proc.open_files())
+        s = self.make_selector()
+        s.close()
+        after_fds = len(proc.open_files())
+        self.assertEqual(before_fds, after_fds)
+
+    def test_selector_error_exception(self):
+        err = selectors.SelectorError(1)
+        self.assertEqual(err.__repr__(), "<SelectorError errno=1>")
+        self.assertEqual(err.__str__(), "<SelectorError errno=1>")
+
+
+class BaseWaitForTestCase(unittest.TestCase, TimerMixin, AlarmMixin):
+    def make_socketpair(self):
+        rd, wr = socketpair()
+
+        # Make non-blocking so we get errors if the
+        # sockets are interacted with but not ready.
+        rd.settimeout(0.0)
+        wr.settimeout(0.0)
+
+        self.addCleanup(rd.close)
+        self.addCleanup(wr.close)
+        return rd, wr
+
+    def test_wait_for_read_single_socket(self):
+        rd, wr = self.make_socketpair()
+        self.assertEqual([], wait.wait_for_read(rd, timeout=SHORT_SELECT))
+
+    def test_wait_for_read_multiple_socket(self):
+        rd, rd2 = self.make_socketpair()
+        self.assertEqual([], wait.wait_for_read([rd, rd2], timeout=SHORT_SELECT))
+
+    def test_wait_for_read_empty(self):
+        self.assertEqual([], wait.wait_for_read([], timeout=SHORT_SELECT))
+
+    def test_wait_for_write_single_socket(self):
+        wr, wr2 = self.make_socketpair()
+        self.assertEqual([wr], wait.wait_for_write(wr, timeout=SHORT_SELECT))
+
+    def test_wait_for_write_multiple_socket(self):
+        wr, wr2 = self.make_socketpair()
+        result = wait.wait_for_write([wr, wr2], timeout=SHORT_SELECT)
+        # assertItemsEqual renamed in Python 3.x
+        if hasattr(self, "assertItemsEqual"):
+            self.assertItemsEqual([wr, wr2], result)
+        else:
+            self.assertCountEqual([wr, wr2], result)
+
+    def test_wait_for_write_empty(self):
+        self.assertEqual([], wait.wait_for_write([], timeout=SHORT_SELECT))
+
+    def test_wait_for_non_list_iterable(self):
+        rd, wr = self.make_socketpair()
+        iterable = {'rd': rd}.values()
+        self.assertEqual([], wait.wait_for_read(iterable, timeout=SHORT_SELECT))
+
+    def test_wait_timeout(self):
+        rd, wr = self.make_socketpair()
+        with self.assertTakesTime(lower=SHORT_SELECT, upper=SHORT_SELECT):
+            wait.wait_for_read([rd], timeout=SHORT_SELECT)
+
+    def test_wait_io_close_is_called(self):
+        selector = selectors.DefaultSelector()
+        self.addCleanup(selector.close)
+
+        def fake_constructor():
+            return selector
+
+        old_selector = wait.DefaultSelector
+        wait.DefaultSelector = fake_constructor
+        self.addCleanup(setattr, wait, "DefaultSelector", old_selector)
+
+        rd, wr = self.make_socketpair()
+        wait.wait_for_write([rd, wr], 0.001)
+        self.assertIs(selector._map, None)
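The wait tests above cover the thin convenience layer on top of the selector classes: wait_for_read() and wait_for_write() accept a single socket or an iterable of sockets plus a timeout and return the subset that is ready. For example (a POSIX sketch; the suite itself falls back to socketpair_helper on Windows):

    import socket
    from urllib3.util import wait

    a, b = socket.socketpair()
    try:
        # Nothing written yet: a short wait reports no readable sockets.
        assert wait.wait_for_read(a, timeout=0.01) == []
        b.sendall(b'x')
        # Now the read end shows up as ready.
        assert wait.wait_for_read(a, timeout=1.0) == [a]
    finally:
        a.close()
        b.close()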
self.assertIs(selector._map, None) + + @skipUnlessHasAlarm + def test_interrupt_wait_for_read_no_event(self): + rd, wr = self.make_socketpair() + + self.set_alarm(SHORT_SELECT, lambda *args: None) + with self.assertTakesTime(lower=LONG_SELECT, upper=LONG_SELECT): + self.assertEqual([], wait.wait_for_read(rd, timeout=LONG_SELECT)) + + @skipUnlessHasAlarm + def test_interrupt_wait_for_read_with_event(self): + rd, wr = self.make_socketpair() + + self.set_alarm(SHORT_SELECT, lambda *args: wr.send(b'x')) + with self.assertTakesTime(lower=SHORT_SELECT, upper=SHORT_SELECT): + self.assertEqual([rd], wait.wait_for_read(rd, timeout=LONG_SELECT)) + self.assertEqual(rd.recv(1), b'x') + + +class ScalableSelectorMixin(object): + """ Mixin to test selectors that allow more fds than FD_SETSIZE """ + @skipUnless(resource, "Could not import the resource module") + def test_above_fd_setsize(self): + # A scalable implementation should have no problem with more than + # FD_SETSIZE file descriptors. Since we don't know the value, we just + # try to set the soft RLIMIT_NOFILE to the hard RLIMIT_NOFILE ceiling. + soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE) + if hard == resource.RLIM_INFINITY: + self.skipTest("RLIMIT_NOFILE is infinite") + + try: # If we're on a *BSD system, the limit tag is different. + _, bsd_hard = resource.getrlimit(resource.RLIMIT_OFILE) + if bsd_hard == resource.RLIM_INFINITY: + self.skipTest("RLIMIT_OFILE is infinite") + if bsd_hard < hard: + hard = bsd_hard + + # NOTE: AttributeError resource.RLIMIT_OFILE is not defined on Mac OS. + except (OSError, resource.error, AttributeError): + pass + + try: + resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard)) + self.addCleanup(resource.setrlimit, resource.RLIMIT_NOFILE, + (soft, hard)) + limit_nofile = min(hard, 2 ** 16) + except (OSError, ValueError): + limit_nofile = soft + + # Guard against already allocated FDs + limit_nofile -= 256 + limit_nofile = max(0, limit_nofile) + + s = self.make_selector() + + for i in range(limit_nofile // 2): + rd, wr = self.make_socketpair() + s.register(rd, selectors.EVENT_READ) + s.register(wr, selectors.EVENT_WRITE) + + self.assertEqual(limit_nofile // 2, len(s.select())) + + +@skipUnlessHasSelector +class TestUniqueSelectScenarios(BaseSelectorTestCase): + def test_select_module_patched_after_import(self): + # This test is to make sure that after import time + # calling DefaultSelector() will still give a good + # return value. This issue is caused by gevent, eventlet. + + # Now remove all selectors except `select.select`. + patch_select_module(self, 'select') + + # Make sure that the selector returned only uses the selector available. + selector = self.make_selector() + self.assertIsInstance(selector, selectors.SelectSelector) + + @skipUnlessHasENOSYS + def test_select_module_defines_does_not_implement_poll(self): + # This test is to make sure that if a platform defines + # a selector as being available but does not actually + # implement it (kennethreitz/requests#3906) then + # DefaultSelector() does not fail. + + # Reset the _DEFAULT_SELECTOR value as if using for the first time. + selectors._DEFAULT_SELECTOR = None + + # Now we're going to patch in a bad `poll`. + class BadPoll(object): + def poll(self, timeout): + raise OSError(errno.ENOSYS) + + # Remove all selectors except `select.select` and replace `select.poll`. 
+ patch_select_module(self, 'select', poll=BadPoll) + + selector = self.make_selector() + self.assertIsInstance(selector, selectors.SelectSelector) + + @skipUnlessHasENOSYS + def test_select_module_defines_does_not_implement_epoll(self): + # Same as above test except with `select.epoll`. + + # Reset the _DEFAULT_SELECTOR value as if using for the first time. + selectors._DEFAULT_SELECTOR = None + + # Now we're going to patch in a bad `epoll`. + def bad_epoll(*args, **kwargs): + raise OSError(errno.ENOSYS) + + # Remove all selectors except `select.select` and replace `select.epoll`. + patch_select_module(self, 'select', epoll=bad_epoll) + + selector = self.make_selector() + self.assertIsInstance(selector, selectors.SelectSelector) + + +@skipUnless(hasattr(selectors, "SelectSelector"), "Platform doesn't have a SelectSelector") +class SelectSelectorTestCase(BaseSelectorTestCase): + def setUp(self): + patch_select_module(self, 'select') + + +@skipUnless(hasattr(selectors, "PollSelector"), "Platform doesn't have a PollSelector") +class PollSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixin): + def setUp(self): + patch_select_module(self, 'poll') + + +@skipUnless(hasattr(selectors, "EpollSelector"), "Platform doesn't have an EpollSelector") +class EpollSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixin): + def setUp(self): + patch_select_module(self, 'epoll') + + +@skipUnless(hasattr(selectors, "KqueueSelector"), "Platform doesn't have a KqueueSelector") +class KqueueSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixin): + def setUp(self): + patch_select_module(self, 'kqueue') + + +@skipUnless(hasattr(selectors, "SelectSelector"), "Platform doesn't have a SelectSelector") +class SelectWaitForTestCase(BaseWaitForTestCase): + def setUp(self): + patch_select_module(self, 'select') + + +@skipUnless(hasattr(selectors, "PollSelector"), "Platform doesn't have a PollSelector") +class PollWaitForTestCase(BaseWaitForTestCase): + def setUp(self): + patch_select_module(self, 'poll') + + +@skipUnless(hasattr(selectors, "EpollSelector"), "Platform doesn't have an EpollSelector") +class EpollWaitForTestCase(BaseWaitForTestCase): + def setUp(self): + patch_select_module(self, 'epoll') + + +@skipUnless(hasattr(selectors, "KqueueSelector"), "Platform doesn't have a KqueueSelector") +class KqueueWaitForTestCase(BaseWaitForTestCase): + def setUp(self): + patch_select_module(self, 'kqueue') diff -Nru python-urllib3-1.19.1/test/test_util.py python-urllib3-1.21.1/test/test_util.py --- python-urllib3-1.19.1/test/test_util.py 2016-11-03 15:16:16.000000000 +0000 +++ python-urllib3-1.21.1/test/test_util.py 2017-04-25 11:10:19.000000000 +0000 @@ -1,6 +1,7 @@ import hashlib import warnings import logging +import io import unittest import ssl import socket @@ -9,7 +10,7 @@ from mock import patch, Mock from urllib3 import add_stderr_logger, disable_warnings -from urllib3.util.request import make_headers +from urllib3.util.request import make_headers, rewind_body, _FAILEDTELL from urllib3.util.retry import Retry from urllib3.util.timeout import Timeout from urllib3.util.url import ( @@ -28,9 +29,9 @@ LocationParseError, TimeoutStateError, InsecureRequestWarning, - SSLError, SNIMissingWarning, InvalidHeader, + UnrewindableBodyError, ) from urllib3.util.connection import ( allowed_gai_family, @@ -46,6 +47,7 @@ # numbers used for timeouts TIMEOUT_EPOCH = 1000 + class TestUtil(unittest.TestCase): def test_get_host(self): url_host_map = { @@ -81,8 +83,10 @@ 'http://[2a00:1450:4001:c01::67]:80/test': 
('http', '[2a00:1450:4001:c01::67]', 80), # More IPv6 from http://www.ietf.org/rfc/rfc2732.txt - 'http://[fedc:ba98:7654:3210:fedc:ba98:7654:3210]:8000/index.html': ('http', '[fedc:ba98:7654:3210:fedc:ba98:7654:3210]', 8000), - 'http://[1080:0:0:0:8:800:200c:417a]/index.html': ('http', '[1080:0:0:0:8:800:200c:417a]', None), + 'http://[fedc:ba98:7654:3210:fedc:ba98:7654:3210]:8000/index.html': ( + 'http', '[fedc:ba98:7654:3210:fedc:ba98:7654:3210]', 8000), + 'http://[1080:0:0:0:8:800:200c:417a]/index.html': ( + 'http', '[1080:0:0:0:8:800:200c:417a]', None), 'http://[3ffe:2a00:100:7031::1]': ('http', '[3ffe:2a00:100:7031::1]', None), 'http://[1080::8:800:200c:417a]/foo': ('http', '[1080::8:800:200c:417a]', None), 'http://[::192.9.5.5]/ipng': ('http', '[::192.9.5.5]', None), @@ -107,7 +111,10 @@ self.assertRaises(LocationParseError, get_host, location) def test_host_normalization(self): - """Asserts the scheme and host is normalized to lower-case.""" + """ + Asserts the scheme and hosts with a normalizable scheme are + converted to lower-case. + """ url_host_map = { # Hosts 'HTTP://GOOGLE.COM/mail/': ('http', 'google.com', None), @@ -117,8 +124,13 @@ '173.194.35.7': ('http', '173.194.35.7', None), 'HTTP://173.194.35.7': ('http', '173.194.35.7', None), 'HTTP://[2a00:1450:4001:c01::67]:80/test': ('http', '[2a00:1450:4001:c01::67]', 80), - 'HTTP://[FEDC:BA98:7654:3210:FEDC:BA98:7654:3210]:8000/index.html': ('http', '[fedc:ba98:7654:3210:fedc:ba98:7654:3210]', 8000), - 'HTTPS://[1080:0:0:0:8:800:200c:417A]/index.html': ('https', '[1080:0:0:0:8:800:200c:417a]', None), + 'HTTP://[FEDC:BA98:7654:3210:FEDC:BA98:7654:3210]:8000/index.html': ( + 'http', '[fedc:ba98:7654:3210:fedc:ba98:7654:3210]', 8000), + 'HTTPS://[1080:0:0:0:8:800:200c:417A]/index.html': ( + 'https', '[1080:0:0:0:8:800:200c:417a]', None), + 'abOut://eXamPlE.com?info=1': ('about', 'eXamPlE.com', None), + 'http+UNIX://%2fvar%2frun%2fSOCKET/path': ( + 'http+unix', '%2fvar%2frun%2fSOCKET', None), } for url, expected_host in url_host_map.items(): returned_host = get_host(url) @@ -128,7 +140,8 @@ """Assert parse_url normalizes the scheme/host, and only the scheme/host""" test_urls = [ ('HTTP://GOOGLE.COM/MAIL/', 'http://google.com/MAIL/'), - ('HTTP://JeremyCline:Hunter2@Example.com:8080/', 'http://JeremyCline:Hunter2@example.com:8080/'), + ('HTTP://JeremyCline:Hunter2@Example.com:8080/', + 'http://JeremyCline:Hunter2@example.com:8080/'), ('HTTPS://Example.Com/?Key=Value', 'https://example.com/?Key=Value'), ('Https://Example.Com/#Fragment', 'https://example.com/#Fragment'), ] @@ -136,34 +149,39 @@ actual_normalized_url = parse_url(url).url self.assertEqual(actual_normalized_url, expected_normalized_url) - parse_url_host_map = { - 'http://google.com/mail': Url('http', host='google.com', path='/mail'), - 'http://google.com/mail/': Url('http', host='google.com', path='/mail/'), - 'http://google.com/mail': Url('http', host='google.com', path='mail'), - 'google.com/mail': Url(host='google.com', path='/mail'), - 'http://google.com/': Url('http', host='google.com', path='/'), - 'http://google.com': Url('http', host='google.com'), - 'http://google.com?foo': Url('http', host='google.com', path='', query='foo'), + parse_url_host_map = [ + ('http://google.com/mail', Url('http', host='google.com', path='/mail')), + ('http://google.com/mail/', Url('http', host='google.com', path='/mail/')), + ('http://google.com/mail', Url('http', host='google.com', path='mail')), + ('google.com/mail', Url(host='google.com', path='/mail')), + ('http://google.com/', 
Url('http', host='google.com', path='/')), + ('http://google.com', Url('http', host='google.com')), + ('http://google.com?foo', Url('http', host='google.com', path='', query='foo')), # Path/query/fragment - '': Url(), - '/': Url(path='/'), - '#?/!google.com/?foo#bar': Url(path='', fragment='?/!google.com/?foo#bar'), - '/foo': Url(path='/foo'), - '/foo?bar=baz': Url(path='/foo', query='bar=baz'), - '/foo?bar=baz#banana?apple/orange': Url(path='/foo', query='bar=baz', fragment='banana?apple/orange'), + ('', Url()), + ('/', Url(path='/')), + ('#?/!google.com/?foo#bar', Url(path='', fragment='?/!google.com/?foo#bar')), + ('/foo', Url(path='/foo')), + ('/foo?bar=baz', Url(path='/foo', query='bar=baz')), + ('/foo?bar=baz#banana?apple/orange', Url(path='/foo', + query='bar=baz', + fragment='banana?apple/orange')), # Port - 'http://google.com/': Url('http', host='google.com', path='/'), - 'http://google.com:80/': Url('http', host='google.com', port=80, path='/'), - 'http://google.com:80': Url('http', host='google.com', port=80), + ('http://google.com/', Url('http', host='google.com', path='/')), + ('http://google.com:80/', Url('http', host='google.com', port=80, path='/')), + ('http://google.com:80', Url('http', host='google.com', port=80)), # Auth - 'http://foo:bar@localhost/': Url('http', auth='foo:bar', host='localhost', path='/'), - 'http://foo@localhost/': Url('http', auth='foo', host='localhost', path='/'), - 'http://foo:bar@baz@localhost/': Url('http', auth='foo:bar@baz', host='localhost', path='/'), - 'http://@': Url('http', host=None, auth='') - } + ('http://foo:bar@localhost/', Url('http', auth='foo:bar', host='localhost', path='/')), + ('http://foo@localhost/', Url('http', auth='foo', host='localhost', path='/')), + ('http://foo:bar@baz@localhost/', Url('http', + auth='foo:bar@baz', + host='localhost', + path='/')), + ('http://@', Url('http', host=None, auth='')) + ] non_round_tripping_parse_url_host_map = { # Path/query/fragment @@ -177,12 +195,13 @@ } def test_parse_url(self): - for url, expected_Url in chain(self.parse_url_host_map.items(), self.non_round_tripping_parse_url_host_map.items()): + for url, expected_Url in chain(self.parse_url_host_map, + self.non_round_tripping_parse_url_host_map.items()): returned_Url = parse_url(url) self.assertEqual(returned_Url, expected_Url) def test_unparse_url(self): - for url, expected_Url in self.parse_url_host_map.items(): + for url, expected_Url in self.parse_url_host_map: self.assertEqual(url, expected_Url.url) def test_parse_url_invalid_IPv6(self): @@ -256,6 +275,41 @@ make_headers(disable_cache=True), {'cache-control': 'no-cache'}) + def test_rewind_body(self): + body = io.BytesIO(b'test data') + self.assertEqual(body.read(), b'test data') + + # Assert the file object has been consumed + self.assertEqual(body.read(), b'') + + # Rewind it back to just be b'data' + rewind_body(body, 5) + self.assertEqual(body.read(), b'data') + + def test_rewind_body_failed_tell(self): + body = io.BytesIO(b'test data') + body.read() # Consume body + + # Simulate failed tell() + body_pos = _FAILEDTELL + self.assertRaises(UnrewindableBodyError, rewind_body, body, body_pos) + + def test_rewind_body_bad_position(self): + body = io.BytesIO(b'test data') + body.read() # Consume body + + # Pass non-integer position + self.assertRaises(ValueError, rewind_body, body, None) + self.assertRaises(ValueError, rewind_body, body, object()) + + def test_rewind_body_failed_seek(self): + class BadSeek(): + + def seek(self, pos, offset=0): + raise IOError + + 
self.assertRaises(UnrewindableBodyError, rewind_body, BadSeek(), 2) + def test_split_first(self): test_cases = { ('abcd', 'b'): ('a', 'cd', 'b'), @@ -269,7 +323,7 @@ self.assertEqual(output, expected) def test_add_stderr_logger(self): - handler = add_stderr_logger(level=logging.INFO) # Don't actually print debug + handler = add_stderr_logger(level=logging.INFO) # Don't actually print debug logger = logging.getLogger('urllib3') self.assertTrue(handler in logger.handlers) @@ -334,7 +388,6 @@ except ValueError as e: self.assertTrue('int, float or None' in str(e)) - @patch('urllib3.util.timeout.current_time') def test_timeout(self, current_time): timeout = Timeout(total=3) @@ -374,14 +427,12 @@ timeout = Timeout(5) self.assertEqual(timeout.total, 5) - def test_timeout_str(self): timeout = Timeout(connect=1, read=2, total=3) self.assertEqual(str(timeout), "Timeout(connect=1, read=2, total=3)") timeout = Timeout(connect=1, read=None, total=3) self.assertEqual(str(timeout), "Timeout(connect=1, read=None, total=3)") - @patch('urllib3.util.timeout.current_time') def test_timeout_elapsed(self, current_time): current_time.return_value = TIMEOUT_EPOCH diff -Nru python-urllib3-1.19.1/test/with_dummyserver/test_chunked_transfer.py python-urllib3-1.21.1/test/with_dummyserver/test_chunked_transfer.py --- python-urllib3-1.19.1/test/with_dummyserver/test_chunked_transfer.py 2016-11-03 15:16:16.000000000 +0000 +++ python-urllib3-1.21.1/test/with_dummyserver/test_chunked_transfer.py 2017-05-02 09:08:45.000000000 +0000 @@ -28,7 +28,8 @@ self.start_chunked_handler() chunks = ['foo', 'bar', '', 'bazzzzzzzzzzzzzzzzzzzzzz'] pool = HTTPConnectionPool(self.host, self.port, retries=False) - r = pool.urlopen('GET', '/', chunks, headers=dict(DNT='1'), chunked=True) + pool.urlopen('GET', '/', chunks, headers=dict(DNT='1'), chunked=True) + self.addCleanup(pool.close) self.assertTrue(b'Transfer-Encoding' in self.buffer) body = self.buffer.split(b'\r\n\r\n', 1)[1] @@ -42,7 +43,9 @@ def _test_body(self, data): self.start_chunked_handler() pool = HTTPConnectionPool(self.host, self.port, retries=False) - r = pool.urlopen('GET', '/', data, chunked=True) + self.addCleanup(pool.close) + + pool.urlopen('GET', '/', data, chunked=True) header, body = self.buffer.split(b'\r\n\r\n', 1) self.assertTrue(b'Transfer-Encoding: chunked' in header.split(b'\r\n')) @@ -79,6 +82,7 @@ self.start_chunked_handler() chunks = ['foo', 'bar', '', 'bazzzzzzzzzzzzzzzzzzzzzz'] pool = HTTPConnectionPool(self.host, self.port, retries=False) + self.addCleanup(pool.close) pool.urlopen( 'GET', '/', chunks, headers={'Host': 'test.org'}, chunked=True ) @@ -93,6 +97,7 @@ self.start_chunked_handler() chunks = ['foo', 'bar', '', 'bazzzzzzzzzzzzzzzzzzzzzz'] pool = HTTPConnectionPool(self.host, self.port, retries=False) + self.addCleanup(pool.close) pool.urlopen('GET', '/', chunks, chunked=True) header_block = self.buffer.split(b'\r\n\r\n', 1)[0].lower() diff -Nru python-urllib3-1.19.1/test/with_dummyserver/test_connectionpool.py python-urllib3-1.21.1/test/with_dummyserver/test_connectionpool.py --- python-urllib3-1.19.1/test/with_dummyserver/test_connectionpool.py 2016-11-03 15:16:16.000000000 +0000 +++ python-urllib3-1.21.1/test/with_dummyserver/test_connectionpool.py 2017-05-02 09:08:45.000000000 +0000 @@ -1,4 +1,4 @@ -import errno +import io import logging import socket import sys @@ -6,13 +6,9 @@ import time import warnings -from datetime import datetime -from datetime import timedelta - import mock from .. 
import ( - requires_network, onlyPy3, onlyPy26OrOlder, TARPIT_HOST, VALID_SOURCE_ADDRESSES, INVALID_SOURCE_ADDRESSES, ) from ..port_helpers import find_unused_port @@ -26,8 +22,8 @@ DecodeError, MaxRetryError, ReadTimeoutError, - ProtocolError, NewConnectionError, + UnrewindableBodyError, ) from urllib3.packages.six import b, u from urllib3.packages.six.moves.urllib.parse import urlencode @@ -45,7 +41,7 @@ SHORT_TIMEOUT = 0.001 -LONG_TIMEOUT = 0.01 +LONG_TIMEOUT = 0.03 def wait_for_socket(ready_event): @@ -61,13 +57,14 @@ # Pool-global timeout pool = HTTPConnectionPool(self.host, self.port, timeout=SHORT_TIMEOUT, retries=False) + self.addCleanup(pool.close) wait_for_socket(ready_event) self.assertRaises(ReadTimeoutError, pool.request, 'GET', '/') - block_event.set() # Release block + block_event.set() # Release block # Shouldn't raise this time wait_for_socket(ready_event) - block_event.set() # Pre-release block + block_event.set() # Pre-release block pool.request('GET', '/') def test_conn_closed(self): @@ -75,6 +72,7 @@ self.start_basic_handler(block_send=block_event, num=1) pool = HTTPConnectionPool(self.host, self.port, timeout=SHORT_TIMEOUT, retries=False) + self.addCleanup(pool.close) conn = pool._get_conn() pool._put_conn(conn) try: @@ -96,29 +94,32 @@ # Pool-global timeout timeout = Timeout(read=SHORT_TIMEOUT) pool = HTTPConnectionPool(self.host, self.port, timeout=timeout, retries=False) + self.addCleanup(pool.close) wait_for_socket(ready_event) conn = pool._get_conn() self.assertRaises(ReadTimeoutError, pool._make_request, conn, 'GET', '/') pool._put_conn(conn) - block_event.set() # Release request + block_event.set() # Release request wait_for_socket(ready_event) block_event.clear() self.assertRaises(ReadTimeoutError, pool.request, 'GET', '/') - block_event.set() # Release request + block_event.set() # Release request # Request-specific timeouts should raise errors pool = HTTPConnectionPool(self.host, self.port, timeout=LONG_TIMEOUT, retries=False) + self.addCleanup(pool.close) conn = pool._get_conn() wait_for_socket(ready_event) now = time.time() self.assertRaises(ReadTimeoutError, pool._make_request, conn, 'GET', '/', timeout=timeout) delta = time.time() - now - block_event.set() # Release request + block_event.set() # Release request - self.assertTrue(delta < LONG_TIMEOUT, "timeout was pool-level LONG_TIMEOUT rather than request-level SHORT_TIMEOUT") + message = "timeout was pool-level LONG_TIMEOUT rather than request-level SHORT_TIMEOUT" + self.assertTrue(delta < LONG_TIMEOUT, message) pool._put_conn(conn) wait_for_socket(ready_event) @@ -126,20 +127,24 @@ self.assertRaises(ReadTimeoutError, pool.request, 'GET', '/', timeout=timeout) delta = time.time() - now - self.assertTrue(delta < LONG_TIMEOUT, "timeout was pool-level LONG_TIMEOUT rather than request-level SHORT_TIMEOUT") - block_event.set() # Release request + message = "timeout was pool-level LONG_TIMEOUT rather than request-level SHORT_TIMEOUT" + self.assertTrue(delta < LONG_TIMEOUT, message) + block_event.set() # Release request # Timeout int/float passed directly to request and _make_request should # raise a request timeout wait_for_socket(ready_event) self.assertRaises(ReadTimeoutError, pool.request, 'GET', '/', timeout=SHORT_TIMEOUT) - block_event.set() # Release request + block_event.set() # Release request wait_for_socket(ready_event) conn = pool._new_conn() # FIXME: This assert flakes sometimes. Not sure why. 
- self.assertRaises(ReadTimeoutError, pool._make_request, conn, 'GET', '/', timeout=SHORT_TIMEOUT) - block_event.set() # Release request + self.assertRaises(ReadTimeoutError, + pool._make_request, + conn, 'GET', '/', + timeout=SHORT_TIMEOUT) + block_event.set() # Release request def test_connect_timeout(self): url = '/' @@ -148,6 +153,7 @@ # Pool-global timeout pool = HTTPConnectionPool(host, port, timeout=timeout) + self.addCleanup(pool.close) conn = pool._get_conn() self.assertRaises(ConnectTimeoutError, pool._make_request, conn, 'GET', url) @@ -158,8 +164,12 @@ # Request-specific connection timeouts big_timeout = Timeout(read=LONG_TIMEOUT, connect=LONG_TIMEOUT) pool = HTTPConnectionPool(host, port, timeout=big_timeout, retries=False) + self.addCleanup(pool.close) conn = pool._get_conn() - self.assertRaises(ConnectTimeoutError, pool._make_request, conn, 'GET', url, timeout=timeout) + self.assertRaises(ConnectTimeoutError, + pool._make_request, + conn, 'GET', url, + timeout=timeout) pool._put_conn(conn) self.assertRaises(ConnectTimeoutError, pool.request, 'GET', url, timeout=timeout) @@ -169,11 +179,13 @@ timeout = Timeout(total=None, connect=SHORT_TIMEOUT) pool = HTTPConnectionPool(host, port, timeout=timeout) + self.addCleanup(pool.close) conn = pool._get_conn() self.assertRaises(ConnectTimeoutError, pool._make_request, conn, 'GET', '/') timeout = Timeout(connect=3, read=5, total=SHORT_TIMEOUT) pool = HTTPConnectionPool(host, port, timeout=timeout) + self.addCleanup(pool.close) conn = pool._get_conn() self.assertRaises(ConnectTimeoutError, pool._make_request, conn, 'GET', '/') @@ -185,6 +197,7 @@ # This will get the socket to raise an EAGAIN on the read timeout = Timeout(connect=3, read=SHORT_TIMEOUT) pool = HTTPConnectionPool(self.host, self.port, timeout=timeout, retries=False) + self.addCleanup(pool.close) self.assertRaises(ReadTimeoutError, pool.request, 'GET', '/') block_event.set() @@ -194,11 +207,13 @@ # The connect should succeed and this should hit the read timeout timeout = Timeout(connect=3, read=5, total=SHORT_TIMEOUT) pool = HTTPConnectionPool(self.host, self.port, timeout=timeout, retries=False) + self.addCleanup(pool.close) self.assertRaises(ReadTimeoutError, pool.request, 'GET', '/') def test_create_connection_timeout(self): timeout = Timeout(connect=SHORT_TIMEOUT, total=LONG_TIMEOUT) pool = HTTPConnectionPool(TARPIT_HOST, self.port, timeout=timeout, retries=False) + self.addCleanup(pool.close) conn = pool._new_conn() self.assertRaises(ConnectTimeoutError, conn.connect) @@ -207,15 +222,16 @@ def setUp(self): self.pool = HTTPConnectionPool(self.host, self.port) + self.addCleanup(self.pool.close) def test_get(self): r = self.pool.request('GET', '/specific_method', - fields={'method': 'GET'}) + fields={'method': 'GET'}) self.assertEqual(r.status, 200, r.data) def test_post_url(self): r = self.pool.request('POST', '/specific_method', - fields={'method': 'POST'}) + fields={'method': 'POST'}) self.assertEqual(r.status, 200, r.data) def test_urlopen_put(self): @@ -225,11 +241,11 @@ def test_wrong_specific_method(self): # To make sure the dummy server is actually returning failed responses r = self.pool.request('GET', '/specific_method', - fields={'method': 'POST'}) + fields={'method': 'POST'}) self.assertEqual(r.status, 400, r.data) r = self.pool.request('POST', '/specific_method', - fields={'method': 'GET'}) + fields={'method': 'GET'}) self.assertEqual(r.status, 400, r.data) def test_upload(self): @@ -284,8 +300,8 @@ def test_nagle(self): """ Test that connections have 
TCP_NODELAY turned on """ - # This test needs to be here in order to be run. socket.create_connection actually tries to - # connect to the host provided so we need a dummyserver to be running. + # This test needs to be here in order to be run. socket.create_connection actually tries + # to connect to the host provided so we need a dummyserver to be running. pool = HTTPConnectionPool(self.host, self.port) conn = pool._get_conn() pool._make_request(conn, 'GET', '/') @@ -306,8 +322,8 @@ def test_disable_default_socket_options(self): """Test that passing None disables all socket options.""" - # This test needs to be here in order to be run. socket.create_connection actually tries to - # connect to the host provided so we need a dummyserver to be running. + # This test needs to be here in order to be run. socket.create_connection actually tries + # to connect to the host provided so we need a dummyserver to be running. pool = HTTPConnectionPool(self.host, self.port, socket_options=None) s = pool._new_conn()._new_conn() using_nagle = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) == 0 @@ -316,8 +332,8 @@ def test_defaults_are_applied(self): """Test that modifying the default socket options works.""" - # This test needs to be here in order to be run. socket.create_connection actually tries to - # connect to the host provided so we need a dummyserver to be running. + # This test needs to be here in order to be run. socket.create_connection actually tries + # to connect to the host provided so we need a dummyserver to be running. pool = HTTPConnectionPool(self.host, self.port) # Get the HTTPConnection instance conn = pool._new_conn() @@ -361,7 +377,7 @@ conn = pool._get_conn() try: conn.set_tunnel(self.host, self.port) - except AttributeError: # python 2.6 + except AttributeError: # python 2.6 conn._set_tunnel(self.host, self.port) conn._tunnel = mock.Mock(return_value=None) @@ -460,8 +476,8 @@ def test_post_with_multipart(self): data = {'banana': 'hammock', 'lol': 'cat'} r = self.pool.request('POST', '/echo', - fields=data, - encode_multipart=True) + fields=data, + encode_multipart=True) body = r.data.split(b'\r\n') encoded_data = encode_multipart_formdata(data)[0] @@ -484,13 +500,13 @@ def test_check_gzip(self): r = self.pool.request('GET', '/encodingrequest', - headers={'accept-encoding': 'gzip'}) + headers={'accept-encoding': 'gzip'}) self.assertEqual(r.headers.get('content-encoding'), 'gzip') self.assertEqual(r.data, b'hello, world!') def test_check_deflate(self): r = self.pool.request('GET', '/encodingrequest', - headers={'accept-encoding': 'deflate'}) + headers={'accept-encoding': 'deflate'}) self.assertEqual(r.headers.get('content-encoding'), 'deflate') self.assertEqual(r.data, b'hello, world!') @@ -551,13 +567,16 @@ req2_data = {'count': 'b' * payload_size} resp2_data = encode_multipart_formdata(req2_data, boundary=boundary)[0] - r1 = pool.request('POST', '/echo', fields=req_data, multipart_boundary=boundary, preload_content=False) + r1 = pool.request('POST', '/echo', + fields=req_data, + multipart_boundary=boundary, + preload_content=False) self.assertEqual(r1.read(first_chunk), resp_data[:first_chunk]) try: r2 = pool.request('POST', '/echo', fields=req2_data, multipart_boundary=boundary, - preload_content=False, pool_timeout=0.001) + preload_content=False, pool_timeout=0.001) # This branch should generally bail here, but maybe someday it will # work? Perhaps by some sort of magic. Consider it a TODO. 
@@ -575,7 +594,7 @@ self.assertEqual(pool.num_connections, 1) def test_for_double_release(self): - MAXSIZE=5 + MAXSIZE = 5 # Check default state pool = HTTPConnectionPool(self.host, self.port, maxsize=MAXSIZE) @@ -605,7 +624,7 @@ self.assertEqual(pool.pool.qsize(), MAXSIZE-2) def test_release_conn_parameter(self): - MAXSIZE=5 + MAXSIZE = 5 pool = HTTPConnectionPool(self.host, self.port, maxsize=MAXSIZE) self.assertEqual(pool.pool.qsize(), MAXSIZE) @@ -624,7 +643,7 @@ NoIPv6Warning) continue pool = HTTPConnectionPool(self.host, self.port, - source_address=addr, retries=False) + source_address=addr, retries=False) r = pool.request('GET', '/source_address') self.assertEqual(r.data, b(addr[0])) @@ -632,7 +651,9 @@ for addr in INVALID_SOURCE_ADDRESSES: pool = HTTPConnectionPool(self.host, self.port, source_address=addr, retries=False) # FIXME: This assert flakes sometimes. Not sure why. - self.assertRaises(NewConnectionError, pool.request, 'GET', '/source_address?{0}'.format(addr)) + self.assertRaises(NewConnectionError, + pool.request, + 'GET', '/source_address?{0}'.format(addr)) def test_stream_keepalive(self): x = 2 @@ -676,15 +697,23 @@ # url. We won't get a response for this and so the # conn won't be implicitly returned to the pool. self.assertRaises(MaxRetryError, - http.request, 'GET', '/redirect', fields={'target': '/'}, release_conn=False, retries=0) - - r = http.request('GET', '/redirect', fields={'target': '/'}, release_conn=False, retries=1) + http.request, + 'GET', '/redirect', + fields={'target': '/'}, release_conn=False, retries=0) + + r = http.request('GET', '/redirect', + fields={'target': '/'}, + release_conn=False, + retries=1) r.release_conn() # the pool should still contain poolsize elements self.assertEqual(http.pool.qsize(), http.pool.maxsize) - + def test_mixed_case_hostname(self): + pool = HTTPConnectionPool("LoCaLhOsT", self.port) + response = pool.request('GET', "http://LoCaLhOsT:%d/" % self.port) + self.assertEqual(response.status, 200) class TestRetry(HTTPDummyServerTestCase): @@ -694,8 +723,8 @@ def test_max_retry(self): try: r = self.pool.request('GET', '/redirect', - fields={'target': '/'}, - retries=0) + fields={'target': '/'}, + retries=0) self.fail("Failed to raise MaxRetryError exception, returned %r" % r.status) except MaxRetryError: pass @@ -780,15 +809,19 @@ headers=headers, retries=retry) self.assertEqual(resp.status, 200) self.assertEqual(resp.retries.total, 1) - self.assertEqual(resp.retries.history, (RequestHistory('GET', '/successful_retry', None, 418, None),)) + self.assertEqual(resp.retries.history, + (RequestHistory('GET', '/successful_retry', None, 418, None),)) def test_retry_redirect_history(self): resp = self.pool.request('GET', '/redirect', fields={'target': '/'}) self.assertEqual(resp.status, 200) - self.assertEqual(resp.retries.history, (RequestHistory('GET', '/redirect?target=%2F', None, 303, '/'),)) + self.assertEqual(resp.retries.history, + (RequestHistory('GET', '/redirect?target=%2F', None, 303, '/'),)) def test_multi_redirect_history(self): - r = self.pool.request('GET', '/multi_redirect', fields={'redirect_codes': '303,302,200'}, redirect=False) + r = self.pool.request('GET', '/multi_redirect', + fields={'redirect_codes': '303,302,200'}, + redirect=False) self.assertEqual(r.status, 303) self.assertEqual(r.retries.history, tuple()) @@ -796,13 +829,14 @@ fields={'redirect_codes': '303,302,301,307,302,200'}) self.assertEqual(r.status, 200) self.assertEqual(r.data, b'Done redirecting') - self.assertEqual([(request_history.status, 
request_history.redirect_location) for request_history in r.retries.history], [ - (303, '/multi_redirect?redirect_codes=302,301,307,302,200'), - (302, '/multi_redirect?redirect_codes=301,307,302,200'), - (301, '/multi_redirect?redirect_codes=307,302,200'), - (307, '/multi_redirect?redirect_codes=302,200'), - (302, '/multi_redirect?redirect_codes=200') - ]) + + expected = [(303, '/multi_redirect?redirect_codes=302,301,307,302,200'), + (302, '/multi_redirect?redirect_codes=301,307,302,200'), + (301, '/multi_redirect?redirect_codes=307,302,200'), + (307, '/multi_redirect?redirect_codes=302,200'), + (302, '/multi_redirect?redirect_codes=200')] + actual = [(history.status, history.redirect_location) for history in r.retries.history] + self.assertEqual(actual, expected) class TestRetryAfter(HTTPDummyServerTestCase): @@ -812,37 +846,37 @@ def test_retry_after(self): # Request twice in a second to get a 429 response. r = self.pool.request('GET', '/retry_after', - fields={'status': '429 Too Many Requests'}, - retries=False) + fields={'status': '429 Too Many Requests'}, + retries=False) r = self.pool.request('GET', '/retry_after', - fields={'status': '429 Too Many Requests'}, - retries=False) + fields={'status': '429 Too Many Requests'}, + retries=False) self.assertEqual(r.status, 429) r = self.pool.request('GET', '/retry_after', - fields={'status': '429 Too Many Requests'}, - retries=True) + fields={'status': '429 Too Many Requests'}, + retries=True) self.assertEqual(r.status, 200) # Request twice in a second to get a 503 response. r = self.pool.request('GET', '/retry_after', - fields={'status': '503 Service Unavailable'}, - retries=False) + fields={'status': '503 Service Unavailable'}, + retries=False) r = self.pool.request('GET', '/retry_after', - fields={'status': '503 Service Unavailable'}, - retries=False) + fields={'status': '503 Service Unavailable'}, + retries=False) self.assertEqual(r.status, 503) r = self.pool.request('GET', '/retry_after', - fields={'status': '503 Service Unavailable'}, - retries=True) + fields={'status': '503 Service Unavailable'}, + retries=True) self.assertEqual(r.status, 200) # Ignore Retry-After header on status which is not defined in # Retry.RETRY_AFTER_STATUS_CODES. 
r = self.pool.request('GET', '/retry_after', - fields={'status': "418 I'm a teapot"}, - retries=True) + fields={'status': "418 I'm a teapot"}, + retries=True) self.assertEqual(r.status, 418) def test_redirect_after(self): @@ -870,5 +904,63 @@ self.assertEqual(r.status, 200) self.assertTrue(delta < 1) + +class TestFileBodiesOnRetryOrRedirect(HTTPDummyServerTestCase): + def setUp(self): + self.pool = HTTPConnectionPool(self.host, self.port, timeout=0.1) + + def test_retries_put_filehandle(self): + """HTTP PUT retry with a file-like object should not timeout""" + retry = Retry(total=3, status_forcelist=[418]) + # httplib reads in 8k chunks; use a larger content length + content_length = 65535 + data = b'A' * content_length + uploaded_file = io.BytesIO(data) + headers = {'test-name': 'test_retries_put_filehandle', + 'Content-Length': str(content_length)} + resp = self.pool.urlopen('PUT', '/successful_retry', + headers=headers, + retries=retry, + body=uploaded_file, + assert_same_host=False, redirect=False) + self.assertEqual(resp.status, 200) + + def test_redirect_put_file(self): + """PUT with file object should work with a redirection response""" + retry = Retry(total=3, status_forcelist=[418]) + # httplib reads in 8k chunks; use a larger content length + content_length = 65535 + data = b'A' * content_length + uploaded_file = io.BytesIO(data) + headers = {'test-name': 'test_redirect_put_file', + 'Content-Length': str(content_length)} + url = '/redirect?target=/echo&status=307' + resp = self.pool.urlopen('PUT', url, + headers=headers, + retries=retry, + body=uploaded_file, + assert_same_host=False, redirect=True) + self.assertEqual(resp.status, 200) + self.assertEqual(resp.data, data) + + def test_redirect_with_failed_tell(self): + """Abort request if failed to get a position from tell()""" + class BadTellObject(io.BytesIO): + + def tell(self): + raise IOError + + body = BadTellObject(b'the data') + url = '/redirect?target=/successful_retry' + # httplib uses fileno if Content-Length isn't supplied, + # which is unsupported by BytesIO. 
+ headers = {'Content-Length': '8'} + try: + self.pool.urlopen('PUT', url, headers=headers, body=body) + self.fail('PUT successful despite failed rewind.') + except UnrewindableBodyError as e: + self.assertTrue('Unable to record file position for' in str(e)) + + if __name__ == '__main__': unittest.main() diff -Nru python-urllib3-1.19.1/test/with_dummyserver/test_https.py python-urllib3-1.21.1/test/with_dummyserver/test_https.py --- python-urllib3-1.19.1/test/with_dummyserver/test_https.py 2016-11-03 15:16:16.000000000 +0000 +++ python-urllib3-1.21.1/test/with_dummyserver/test_https.py 2017-05-02 09:08:45.000000000 +0000 @@ -19,9 +19,10 @@ from test import ( onlyPy26OrOlder, onlyPy279OrNewer, + notSecureTransport, + onlyPy27OrNewerOrNonWindows, requires_network, TARPIT_HOST, - clear_warnings, ) from urllib3 import HTTPSConnectionPool from urllib3.connection import ( @@ -31,7 +32,6 @@ ) from urllib3.exceptions import ( SSLError, - ReadTimeoutError, ConnectTimeoutError, InsecureRequestWarning, SystemTimeWarning, @@ -52,10 +52,10 @@ log.addHandler(logging.StreamHandler(sys.stdout)) - class TestHTTPS(HTTPSDummyServerTestCase): def setUp(self): self._pool = HTTPSConnectionPool(self.host, self.port) + self.addCleanup(self._pool.close) def test_simple(self): r = self._pool.request('GET', '/') @@ -96,6 +96,7 @@ ctx.load_verify_locations(cafile=DEFAULT_CA) https_pool = HTTPSConnectionPool(self.host, self.port, ssl_context=ctx) + self.addCleanup(https_pool.close) conn = https_pool._new_conn() self.assertEqual(conn.__class__, VerifiedHTTPSConnection) @@ -122,6 +123,7 @@ https_pool = HTTPSConnectionPool(self.host, self.port, ca_certs=DEFAULT_CA, ssl_context=ctx) + self.addCleanup(https_pool.close) conn = https_pool._new_conn() self.assertEqual(conn.__class__, VerifiedHTTPSConnection) @@ -144,10 +146,12 @@ self.assertEqual(error, InsecurePlatformWarning) @onlyPy279OrNewer + @notSecureTransport def test_ca_dir_verified(self): https_pool = HTTPSConnectionPool(self.host, self.port, cert_reqs='CERT_REQUIRED', ca_cert_dir=DEFAULT_CA_DIR) + self.addCleanup(https_pool.close) conn = https_pool._new_conn() self.assertEqual(conn.__class__, VerifiedHTTPSConnection) @@ -168,16 +172,22 @@ https_pool = HTTPSConnectionPool('127.0.0.1', self.port, cert_reqs='CERT_REQUIRED', ca_certs=DEFAULT_CA) + self.addCleanup(https_pool.close) + try: https_pool.request('GET', '/') self.fail("Didn't raise SSL invalid common name") except SSLError as e: - self.assertTrue("doesn't match" in str(e)) + self.assertTrue( + "doesn't match" in str(e) or + "certificate verify failed" in str(e) + ) def test_verified_with_bad_ca_certs(self): https_pool = HTTPSConnectionPool(self.host, self.port, cert_reqs='CERT_REQUIRED', ca_certs=DEFAULT_CA_BAD) + self.addCleanup(https_pool.close) try: https_pool.request('GET', '/') @@ -191,6 +201,7 @@ # default is cert_reqs=None which is ssl.CERT_NONE https_pool = HTTPSConnectionPool(self.host, self.port, cert_reqs='CERT_REQUIRED') + self.addCleanup(https_pool.close) try: https_pool.request('GET', '/') @@ -200,14 +211,17 @@ # there is a different error message depending on whether or # not pyopenssl is injected self.assertTrue('No root certificates specified' in str(e) or - 'certificate verify failed' in str(e), - "Expected 'No root certificates specified' or " - "'certificate verify failed', " + 'certificate verify failed' in str(e) or + 'invalid certificate chain' in str(e), + "Expected 'No root certificates specified', " + "'certificate verify failed', or " + "'invalid certificate chain', " "instead got: 
%r" % e) def test_no_ssl(self): pool = HTTPSConnectionPool(self.host, self.port) pool.ConnectionCls = None + self.addCleanup(pool.close) self.assertRaises(SSLError, pool._new_conn) self.assertRaises(SSLError, pool.request, 'GET', '/') @@ -215,6 +229,7 @@ """ Test that bare HTTPSConnection can connect, make requests """ pool = HTTPSConnectionPool(self.host, self.port) pool.ConnectionCls = UnverifiedHTTPSConnection + self.addCleanup(pool.close) with mock.patch('warnings.warn') as warn: r = pool.request('GET', '/') @@ -237,6 +252,7 @@ pool = HTTPSConnectionPool(self.host, self.port, cert_reqs='CERT_NONE', ca_certs=DEFAULT_CA_BAD) + self.addCleanup(pool.close) with mock.patch('warnings.warn') as warn: r = pool.request('GET', '/') @@ -259,6 +275,7 @@ https_pool = HTTPSConnectionPool('localhost', self.port, cert_reqs='CERT_REQUIRED', ca_certs=DEFAULT_CA) + self.addCleanup(https_pool.close) https_pool.assert_hostname = False https_pool.request('GET', '/') @@ -267,6 +284,7 @@ https_pool = HTTPSConnectionPool('localhost', self.port, cert_reqs='CERT_REQUIRED', ca_certs=DEFAULT_CA) + self.addCleanup(https_pool.close) https_pool.assert_hostname = 'localhost' https_pool.request('GET', '/') @@ -275,6 +293,7 @@ https_pool = HTTPSConnectionPool('localhost', self.port, cert_reqs='CERT_REQUIRED', ca_certs=DEFAULT_CA) + self.addCleanup(https_pool.close) https_pool.assert_fingerprint = 'F2:06:5A:42:10:3F:45:1C:17:FE:E6:' \ '07:1E:8A:86:E5' @@ -285,6 +304,7 @@ https_pool = HTTPSConnectionPool('localhost', self.port, cert_reqs='CERT_REQUIRED', ca_certs=DEFAULT_CA) + self.addCleanup(https_pool.close) https_pool.assert_fingerprint = '92:81:FE:85:F7:0C:26:60:EC:D6:B3:' \ 'BF:93:CF:F9:71:CC:07:7D:0A' @@ -294,6 +314,7 @@ https_pool = HTTPSConnectionPool('localhost', self.port, cert_reqs='CERT_REQUIRED', ca_certs=DEFAULT_CA) + self.addCleanup(https_pool.close) https_pool.assert_fingerprint = ('C5:4D:0B:83:84:89:2E:AE:B4:58:BB:12:' 'F7:A6:C4:76:05:03:88:D8:57:65:51:F3:' @@ -304,6 +325,7 @@ https_pool = HTTPSConnectionPool('127.0.0.1', self.port, cert_reqs='CERT_REQUIRED', ca_certs=DEFAULT_CA) + self.addCleanup(https_pool.close) https_pool.assert_fingerprint = 'AA:AA:AA:AA:AA:AAAA:AA:AAAA:AA:' \ 'AA:AA:AA:AA:AA:AA:AA:AA:AA' @@ -324,6 +346,7 @@ https_pool = HTTPSConnectionPool('127.0.0.1', self.port, cert_reqs='CERT_NONE', ca_certs=DEFAULT_CA_BAD) + self.addCleanup(https_pool.close) https_pool.assert_fingerprint = 'AA:AA:AA:AA:AA:AAAA:AA:AAAA:AA:' \ 'AA:AA:AA:AA:AA:AA:AA:AA:AA' @@ -333,15 +356,22 @@ https_pool = HTTPSConnectionPool('127.0.0.1', self.port, cert_reqs='CERT_NONE', ca_certs=DEFAULT_CA_BAD) + self.addCleanup(https_pool.close) https_pool.assert_fingerprint = '92:81:FE:85:F7:0C:26:60:EC:D6:B3:' \ 'BF:93:CF:F9:71:CC:07:7D:0A' https_pool.request('GET', '/') + @notSecureTransport def test_good_fingerprint_and_hostname_mismatch(self): + # This test doesn't run with SecureTransport because we don't turn off + # hostname validation without turning off all validation, which this + # test doesn't do (deliberately). We should revisit this if we make + # new decisions. 
https_pool = HTTPSConnectionPool('127.0.0.1', self.port, cert_reqs='CERT_REQUIRED', ca_certs=DEFAULT_CA) + self.addCleanup(https_pool.close) https_pool.assert_fingerprint = '92:81:FE:85:F7:0C:26:60:EC:D6:B3:' \ 'BF:93:CF:F9:71:CC:07:7D:0A' @@ -353,17 +383,20 @@ https_pool = HTTPSConnectionPool(TARPIT_HOST, self.port, timeout=timeout, retries=False, cert_reqs='CERT_REQUIRED') + self.addCleanup(https_pool.close) timeout = Timeout(total=None, connect=0.001) https_pool = HTTPSConnectionPool(TARPIT_HOST, self.port, timeout=timeout, retries=False, cert_reqs='CERT_REQUIRED') + self.addCleanup(https_pool.close) self.assertRaises(ConnectTimeoutError, https_pool.request, 'GET', '/') timeout = Timeout(read=0.001) https_pool = HTTPSConnectionPool(self.host, self.port, timeout=timeout, retries=False, cert_reqs='CERT_REQUIRED') + self.addCleanup(https_pool.close) https_pool.ca_certs = DEFAULT_CA https_pool.assert_fingerprint = '92:81:FE:85:F7:0C:26:60:EC:D6:B3:' \ 'BF:93:CF:F9:71:CC:07:7D:0A' @@ -371,6 +404,7 @@ timeout = Timeout(total=None) https_pool = HTTPSConnectionPool(self.host, self.port, timeout=timeout, cert_reqs='CERT_NONE') + self.addCleanup(https_pool.close) https_pool.request('GET', '/') def test_tunnel(self): @@ -378,10 +412,11 @@ timeout = Timeout(total=None) https_pool = HTTPSConnectionPool(self.host, self.port, timeout=timeout, cert_reqs='CERT_NONE') + self.addCleanup(https_pool.close) conn = https_pool._new_conn() try: conn.set_tunnel(self.host, self.port) - except AttributeError: # python 2.6 + except AttributeError: # python 2.6 conn._set_tunnel(self.host, self.port) conn._tunnel = mock.Mock() https_pool._make_request(conn, 'GET', '/') @@ -406,6 +441,7 @@ timeout=timeout, retries=False, cert_reqs=cert_reqs) + self.addCleanup(https_pool.close) return https_pool https_pool = new_pool(Timeout(connect=0.001)) @@ -429,8 +465,10 @@ conn = VerifiedHTTPSConnection(self.host, self.port) https_pool = HTTPSConnectionPool(self.host, self.port, - cert_reqs='CERT_REQUIRED', ca_certs=DEFAULT_CA, - assert_fingerprint=fingerprint) + cert_reqs='CERT_REQUIRED', + ca_certs=DEFAULT_CA, + assert_fingerprint=fingerprint) + self.addCleanup(https_pool.close) https_pool._make_request(conn, 'GET', '/') @@ -469,8 +507,13 @@ def setUp(self): self._pool = HTTPSConnectionPool(self.host, self.port) + self.addCleanup(self._pool.close) + @onlyPy27OrNewerOrNonWindows def test_discards_connection_on_sslerror(self): + # This test is skipped on Windows for Python 2.6 because we suspect there + # is an issue with the OpenSSL for Python 2.6 on Windows. 
+ self._pool.cert_reqs = 'CERT_REQUIRED' self.assertRaises(SSLError, self._pool.request, 'GET', '/') self._pool.ca_certs = DEFAULT_CA @@ -492,6 +535,7 @@ https_pool = HTTPSConnectionPool(self.host, self.port, cert_reqs='CERT_REQUIRED', ca_certs=NO_SAN_CA) + self.addCleanup(https_pool.close) r = https_pool.request('GET', '/') self.assertEqual(r.status, 200) self.assertTrue(warn.called) @@ -502,9 +546,15 @@ def test_can_validate_ip_san(self): """Ensure that urllib3 can validate SANs with IP addresses in them.""" + try: + import ipaddress # noqa: F401 + except ImportError: + raise SkipTest("Only runs on systems with an ipaddress module") + https_pool = HTTPSConnectionPool('127.0.0.1', self.port, cert_reqs='CERT_REQUIRED', ca_certs=DEFAULT_CA) + self.addCleanup(https_pool.close) r = https_pool.request('GET', '/') self.assertEqual(r.status, 200) @@ -519,6 +569,7 @@ https_pool = HTTPSConnectionPool('[::1]', self.port, cert_reqs='CERT_REQUIRED', ca_certs=IPV6_ADDR_CA) + self.addCleanup(https_pool.close) r = https_pool.request('GET', '/') self.assertEqual(r.status, 200) diff -Nru python-urllib3-1.19.1/test/with_dummyserver/test_no_ssl.py python-urllib3-1.21.1/test/with_dummyserver/test_no_ssl.py --- python-urllib3-1.19.1/test/with_dummyserver/test_no_ssl.py 2015-05-31 16:43:11.000000000 +0000 +++ python-urllib3-1.21.1/test/with_dummyserver/test_no_ssl.py 2017-05-02 09:08:45.000000000 +0000 @@ -8,21 +8,21 @@ from dummyserver.testcase import ( HTTPDummyServerTestCase, HTTPSDummyServerTestCase) +import urllib3 + class TestHTTPWithoutSSL(HTTPDummyServerTestCase, TestWithoutSSL): def test_simple(self): - import urllib3 - pool = urllib3.HTTPConnectionPool(self.host, self.port) + self.addCleanup(pool.close) r = pool.request('GET', '/') self.assertEqual(r.status, 200, r.data) class TestHTTPSWithoutSSL(HTTPSDummyServerTestCase, TestWithoutSSL): def test_simple(self): - import urllib3 - pool = urllib3.HTTPSConnectionPool(self.host, self.port) + self.addCleanup(pool.close) try: pool.request('GET', '/') except urllib3.exceptions.SSLError as e: diff -Nru python-urllib3-1.19.1/test/with_dummyserver/test_poolmanager.py python-urllib3-1.21.1/test/with_dummyserver/test_poolmanager.py --- python-urllib3-1.19.1/test/with_dummyserver/test_poolmanager.py 2016-09-12 09:02:32.000000000 +0000 +++ python-urllib3-1.21.1/test/with_dummyserver/test_poolmanager.py 2017-05-02 09:08:45.000000000 +0000 @@ -7,7 +7,7 @@ IPv6HTTPDummyServerTestCase) from urllib3.poolmanager import PoolManager from urllib3.connectionpool import port_by_scheme -from urllib3.exceptions import MaxRetryError, SSLError +from urllib3.exceptions import MaxRetryError from urllib3.util.retry import Retry @@ -19,6 +19,7 @@ def test_redirect(self): http = PoolManager() + self.addCleanup(http.clear) r = http.request('GET', '%s/redirect' % self.base_url, fields={'target': '%s/' % self.base_url}, @@ -34,6 +35,7 @@ def test_redirect_twice(self): http = PoolManager() + self.addCleanup(http.clear) r = http.request('GET', '%s/redirect' % self.base_url, fields={'target': '%s/redirect' % self.base_url}, @@ -42,28 +44,31 @@ self.assertEqual(r.status, 303) r = http.request('GET', '%s/redirect' % self.base_url, - fields={'target': '%s/redirect?target=%s/' % (self.base_url, self.base_url)}) + fields={'target': '%s/redirect?target=%s/' % (self.base_url, + self.base_url)}) self.assertEqual(r.status, 200) self.assertEqual(r.data, b'Dummy server!') def test_redirect_to_relative_url(self): http = PoolManager() + self.addCleanup(http.clear) r = http.request('GET', '%s/redirect' 
% self.base_url, - fields = {'target': '/redirect'}, - redirect = False) + fields={'target': '/redirect'}, + redirect=False) self.assertEqual(r.status, 303) r = http.request('GET', '%s/redirect' % self.base_url, - fields = {'target': '/redirect'}) + fields={'target': '/redirect'}) self.assertEqual(r.status, 200) self.assertEqual(r.data, b'Dummy server!') def test_cross_host_redirect(self): http = PoolManager() + self.addCleanup(http.clear) cross_host_location = '%s/echo?a=b' % self.base_url_alt try: @@ -83,10 +88,12 @@ def test_too_many_redirects(self): http = PoolManager() + self.addCleanup(http.clear) try: r = http.request('GET', '%s/redirect' % self.base_url, - fields={'target': '%s/redirect?target=%s/' % (self.base_url, self.base_url)}, + fields={'target': '%s/redirect?target=%s/' % (self.base_url, + self.base_url)}, retries=1) self.fail("Failed to raise MaxRetryError exception, returned %r" % r.status) except MaxRetryError: @@ -94,7 +101,8 @@ try: r = http.request('GET', '%s/redirect' % self.base_url, - fields={'target': '%s/redirect?target=%s/' % (self.base_url, self.base_url)}, + fields={'target': '%s/redirect?target=%s/' % (self.base_url, + self.base_url)}, retries=Retry(total=None, redirect=1)) self.fail("Failed to raise MaxRetryError exception, returned %r" % r.status) except MaxRetryError: @@ -102,15 +110,18 @@ def test_raise_on_redirect(self): http = PoolManager() + self.addCleanup(http.clear) r = http.request('GET', '%s/redirect' % self.base_url, - fields={'target': '%s/redirect?target=%s/' % (self.base_url, self.base_url)}, + fields={'target': '%s/redirect?target=%s/' % (self.base_url, + self.base_url)}, retries=Retry(total=None, redirect=1, raise_on_redirect=False)) self.assertEqual(r.status, 303) def test_raise_on_status(self): http = PoolManager() + self.addCleanup(http.clear) try: # the default is to raise @@ -125,7 +136,9 @@ # raise explicitly r = http.request('GET', '%s/status' % self.base_url, fields={'status': '500 Internal Server Error'}, - retries=Retry(total=1, status_forcelist=range(500, 600), raise_on_status=True)) + retries=Retry(total=1, + status_forcelist=range(500, 600), + raise_on_status=True)) self.fail("Failed to raise MaxRetryError exception, returned %r" % r.status) except MaxRetryError: pass @@ -133,7 +146,9 @@ # don't raise r = http.request('GET', '%s/status' % self.base_url, fields={'status': '500 Internal Server Error'}, - retries=Retry(total=1, status_forcelist=range(500, 600), raise_on_status=False)) + retries=Retry(total=1, + status_forcelist=range(500, 600), + raise_on_status=False)) self.assertEqual(r.status, 500) @@ -142,6 +157,7 @@ # will all such URLs fail with an error? 
http = PoolManager() + self.addCleanup(http.clear) # By globally adjusting `port_by_scheme` we pretend for a moment # that HTTP's default port is not 80, but is the port at which @@ -157,6 +173,7 @@ def test_headers(self): http = PoolManager(headers={'Foo': 'bar'}) + self.addCleanup(http.clear) r = http.request('GET', '%s/headers' % self.base_url) returned_headers = json.loads(r.data.decode()) @@ -186,6 +203,7 @@ def test_http_with_ssl_keywords(self): http = PoolManager(ca_certs='REQUIRED') + self.addCleanup(http.clear) r = http.request('GET', 'http://%s:%s/' % (self.host, self.port)) self.assertEqual(r.status, 200) @@ -206,7 +224,9 @@ def test_ipv6(self): http = PoolManager() + self.addCleanup(http.clear) http.request('GET', self.base_url) + if __name__ == '__main__': unittest.main() diff -Nru python-urllib3-1.19.1/test/with_dummyserver/test_proxy_poolmanager.py python-urllib3-1.21.1/test/with_dummyserver/test_proxy_poolmanager.py --- python-urllib3-1.19.1/test/with_dummyserver/test_proxy_poolmanager.py 2016-10-12 16:41:52.000000000 +0000 +++ python-urllib3-1.21.1/test/with_dummyserver/test_proxy_poolmanager.py 2017-05-02 09:08:45.000000000 +0000 @@ -7,7 +7,7 @@ from dummyserver.testcase import HTTPDummyProxyTestCase, IPv6HTTPDummyProxyTestCase from dummyserver.server import ( DEFAULT_CA, DEFAULT_CA_BAD, get_unreachable_address) -from .. import TARPIT_HOST +from .. import TARPIT_HOST, requires_network from urllib3._collections import HTTPHeaderDict from urllib3.poolmanager import proxy_from_url, ProxyManager @@ -29,6 +29,7 @@ def test_basic_proxy(self): http = proxy_from_url(self.proxy_url) + self.addCleanup(http.clear) r = http.request('GET', '%s/' % self.http_url) self.assertEqual(r.status, 200) @@ -39,6 +40,7 @@ def test_nagle_proxy(self): """ Test that proxy connections do not have TCP_NODELAY turned on """ http = proxy_from_url(self.proxy_url) + self.addCleanup(http.clear) hc2 = http.connection_from_host(self.http_host, self.http_port) conn = hc2._get_conn() hc2._make_request(conn, 'GET', '/') @@ -50,6 +52,7 @@ def test_proxy_conn_fail(self): host, port = get_unreachable_address() http = proxy_from_url('http://%s:%s/' % (host, port), retries=1, timeout=0.05) + self.addCleanup(http.clear) self.assertRaises(MaxRetryError, http.request, 'GET', '%s/' % self.https_url) self.assertRaises(MaxRetryError, http.request, 'GET', @@ -63,6 +66,7 @@ def test_oldapi(self): http = ProxyManager(connection_from_url(self.proxy_url)) + self.addCleanup(http.clear) r = http.request('GET', '%s/' % self.http_url) self.assertEqual(r.status, 200) @@ -73,6 +77,7 @@ def test_proxy_verified(self): http = proxy_from_url(self.proxy_url, cert_reqs='REQUIRED', ca_certs=DEFAULT_CA_BAD) + self.addCleanup(http.clear) https_pool = http._new_pool('https', self.https_host, self.https_port) try: @@ -104,6 +109,7 @@ def test_redirect(self): http = proxy_from_url(self.proxy_url) + self.addCleanup(http.clear) r = http.request('GET', '%s/redirect' % self.http_url, fields={'target': '%s/' % self.http_url}, @@ -119,6 +125,7 @@ def test_cross_host_redirect(self): http = proxy_from_url(self.proxy_url) + self.addCleanup(http.clear) cross_host_location = '%s/echo?a=b' % self.http_url_alt try: @@ -137,6 +144,7 @@ def test_cross_protocol_redirect(self): http = proxy_from_url(self.proxy_url) + self.addCleanup(http.clear) cross_protocol_location = '%s/echo?a=b' % self.https_url try: @@ -154,43 +162,44 @@ self.assertEqual(r._pool.host, self.https_host) def test_headers(self): - http = proxy_from_url(self.proxy_url,headers={'Foo': 'bar'}, 
- proxy_headers={'Hickory': 'dickory'}) + http = proxy_from_url(self.proxy_url, headers={'Foo': 'bar'}, + proxy_headers={'Hickory': 'dickory'}) + self.addCleanup(http.clear) r = http.request_encode_url('GET', '%s/headers' % self.http_url) returned_headers = json.loads(r.data.decode()) self.assertEqual(returned_headers.get('Foo'), 'bar') self.assertEqual(returned_headers.get('Hickory'), 'dickory') self.assertEqual(returned_headers.get('Host'), - '%s:%s'%(self.http_host,self.http_port)) + '%s:%s' % (self.http_host, self.http_port)) r = http.request_encode_url('GET', '%s/headers' % self.http_url_alt) returned_headers = json.loads(r.data.decode()) self.assertEqual(returned_headers.get('Foo'), 'bar') self.assertEqual(returned_headers.get('Hickory'), 'dickory') self.assertEqual(returned_headers.get('Host'), - '%s:%s'%(self.http_host_alt,self.http_port)) + '%s:%s' % (self.http_host_alt, self.http_port)) r = http.request_encode_url('GET', '%s/headers' % self.https_url) returned_headers = json.loads(r.data.decode()) self.assertEqual(returned_headers.get('Foo'), 'bar') self.assertEqual(returned_headers.get('Hickory'), None) self.assertEqual(returned_headers.get('Host'), - '%s:%s'%(self.https_host,self.https_port)) + '%s:%s' % (self.https_host, self.https_port)) r = http.request_encode_url('GET', '%s/headers' % self.https_url_alt) returned_headers = json.loads(r.data.decode()) self.assertEqual(returned_headers.get('Foo'), 'bar') self.assertEqual(returned_headers.get('Hickory'), None) self.assertEqual(returned_headers.get('Host'), - '%s:%s'%(self.https_host_alt,self.https_port)) + '%s:%s' % (self.https_host_alt, self.https_port)) r = http.request_encode_body('POST', '%s/headers' % self.http_url) returned_headers = json.loads(r.data.decode()) self.assertEqual(returned_headers.get('Foo'), 'bar') self.assertEqual(returned_headers.get('Hickory'), 'dickory') self.assertEqual(returned_headers.get('Host'), - '%s:%s'%(self.http_host,self.http_port)) + '%s:%s' % (self.http_host, self.http_port)) r = http.request_encode_url('GET', '%s/headers' % self.http_url, headers={'Baz': 'quux'}) returned_headers = json.loads(r.data.decode()) @@ -198,7 +207,7 @@ self.assertEqual(returned_headers.get('Baz'), 'quux') self.assertEqual(returned_headers.get('Hickory'), 'dickory') self.assertEqual(returned_headers.get('Host'), - '%s:%s'%(self.http_host,self.http_port)) + '%s:%s' % (self.http_host, self.http_port)) r = http.request_encode_url('GET', '%s/headers' % self.https_url, headers={'Baz': 'quux'}) returned_headers = json.loads(r.data.decode()) @@ -206,7 +215,7 @@ self.assertEqual(returned_headers.get('Baz'), 'quux') self.assertEqual(returned_headers.get('Hickory'), None) self.assertEqual(returned_headers.get('Host'), - '%s:%s'%(self.https_host,self.https_port)) + '%s:%s' % (self.https_host, self.https_port)) r = http.request_encode_body('GET', '%s/headers' % self.http_url, headers={'Baz': 'quux'}) returned_headers = json.loads(r.data.decode()) @@ -214,7 +223,7 @@ self.assertEqual(returned_headers.get('Baz'), 'quux') self.assertEqual(returned_headers.get('Hickory'), 'dickory') self.assertEqual(returned_headers.get('Host'), - '%s:%s'%(self.http_host,self.http_port)) + '%s:%s' % (self.http_host, self.http_port)) r = http.request_encode_body('GET', '%s/headers' % self.https_url, headers={'Baz': 'quux'}) returned_headers = json.loads(r.data.decode()) @@ -222,7 +231,7 @@ self.assertEqual(returned_headers.get('Baz'), 'quux') self.assertEqual(returned_headers.get('Hickory'), None) self.assertEqual(returned_headers.get('Host'), - 
'%s:%s'%(self.https_host,self.https_port)) + '%s:%s' % (self.https_host, self.https_port)) def test_headerdict(self): default_headers = HTTPHeaderDict(a='b') @@ -233,6 +242,7 @@ self.proxy_url, headers=default_headers, proxy_headers=proxy_headers) + self.addCleanup(http.clear) request_headers = HTTPHeaderDict(baz='quux') r = http.request('GET', '%s/headers' % self.http_url, headers=request_headers) @@ -242,58 +252,63 @@ def test_proxy_pooling(self): http = proxy_from_url(self.proxy_url) + self.addCleanup(http.clear) for x in range(2): - r = http.urlopen('GET', self.http_url) + http.urlopen('GET', self.http_url) self.assertEqual(len(http.pools), 1) for x in range(2): - r = http.urlopen('GET', self.http_url_alt) + http.urlopen('GET', self.http_url_alt) self.assertEqual(len(http.pools), 1) for x in range(2): - r = http.urlopen('GET', self.https_url) + http.urlopen('GET', self.https_url) self.assertEqual(len(http.pools), 2) for x in range(2): - r = http.urlopen('GET', self.https_url_alt) + http.urlopen('GET', self.https_url_alt) self.assertEqual(len(http.pools), 3) def test_proxy_pooling_ext(self): http = proxy_from_url(self.proxy_url) + self.addCleanup(http.clear) + hc1 = http.connection_from_url(self.http_url) hc2 = http.connection_from_host(self.http_host, self.http_port) hc3 = http.connection_from_url(self.http_url_alt) hc4 = http.connection_from_host(self.http_host_alt, self.http_port) - self.assertEqual(hc1,hc2) - self.assertEqual(hc2,hc3) - self.assertEqual(hc3,hc4) + self.assertEqual(hc1, hc2) + self.assertEqual(hc2, hc3) + self.assertEqual(hc3, hc4) sc1 = http.connection_from_url(self.https_url) sc2 = http.connection_from_host(self.https_host, - self.https_port,scheme='https') + self.https_port, scheme='https') sc3 = http.connection_from_url(self.https_url_alt) sc4 = http.connection_from_host(self.https_host_alt, - self.https_port,scheme='https') - self.assertEqual(sc1,sc2) - self.assertNotEqual(sc2,sc3) - self.assertEqual(sc3,sc4) - + self.https_port, scheme='https') + self.assertEqual(sc1, sc2) + self.assertNotEqual(sc2, sc3) + self.assertEqual(sc3, sc4) @timed(0.5) + @requires_network def test_https_proxy_timeout(self): https = proxy_from_url('https://{host}'.format(host=TARPIT_HOST)) + self.addCleanup(https.clear) try: https.request('GET', self.http_url, timeout=0.001) self.fail("Failed to raise retry error.") except MaxRetryError as e: - self.assertEqual(type(e.reason), ConnectTimeoutError) - + self.assertEqual(type(e.reason), ConnectTimeoutError) @timed(0.5) + @requires_network def test_https_proxy_pool_timeout(self): https = proxy_from_url('https://{host}'.format(host=TARPIT_HOST), timeout=0.001) + self.addCleanup(https.clear) try: https.request('GET', self.http_url) self.fail("Failed to raise retry error.") @@ -303,6 +318,7 @@ def test_scheme_host_case_insensitive(self): """Assert that upper-case schemes and hosts are normalized.""" http = proxy_from_url(self.proxy_url.upper()) + self.addCleanup(http.clear) r = http.request('GET', '%s/' % self.http_url.upper()) self.assertEqual(r.status, 200) @@ -324,6 +340,7 @@ def test_basic_ipv6_proxy(self): http = proxy_from_url(self.proxy_url) + self.addCleanup(http.clear) r = http.request('GET', '%s/' % self.http_url) self.assertEqual(r.status, 200) @@ -331,5 +348,6 @@ r = http.request('GET', '%s/' % self.https_url) self.assertEqual(r.status, 200) + if __name__ == '__main__': unittest.main() diff -Nru python-urllib3-1.19.1/test/with_dummyserver/test_socketlevel.py python-urllib3-1.21.1/test/with_dummyserver/test_socketlevel.py --- 
python-urllib3-1.19.1/test/with_dummyserver/test_socketlevel.py 2016-11-03 13:44:18.000000000 +0000 +++ python-urllib3-1.21.1/test/with_dummyserver/test_socketlevel.py 2017-05-02 09:08:45.000000000 +0000 @@ -6,7 +6,6 @@ from urllib3.exceptions import ( MaxRetryError, ProxyError, - ConnectTimeoutError, ReadTimeoutError, SSLError, ProtocolError, @@ -19,7 +18,7 @@ from dummyserver.testcase import SocketDummyServerTestCase, consume_socket from dummyserver.server import ( - DEFAULT_CERTS, DEFAULT_CA, get_unreachable_address) + DEFAULT_CERTS, DEFAULT_CA, COMBINED_CERT_AND_KEY, get_unreachable_address) from .. import onlyPy3, LogRecorder @@ -53,6 +52,7 @@ self._start_server(multicookie_response_handler) pool = HTTPConnectionPool(self.host, self.port) + self.addCleanup(pool.close) r = pool.request('GET', '/', retries=0) self.assertEqual(r.headers, {'set-cookie': 'foo=1, bar=1'}) self.assertEqual(r.headers.getlist('set-cookie'), ['foo=1', 'bar=1']) @@ -70,21 +70,166 @@ def socket_handler(listener): sock = listener.accept()[0] - self.buf = sock.recv(65536) # We only accept one packet + self.buf = sock.recv(65536) # We only accept one packet done_receiving.set() # let the test know it can proceed sock.close() self._start_server(socket_handler) pool = HTTPSConnectionPool(self.host, self.port) + self.addCleanup(pool.close) try: pool.request('GET', '/', retries=0) - except SSLError: # We are violating the protocol + except SSLError: # We are violating the protocol pass done_receiving.wait() self.assertTrue(self.host.encode('ascii') in self.buf, "missing hostname in SSL handshake") +class TestClientCerts(SocketDummyServerTestCase): + """ + Tests for client certificate support. + """ + def _wrap_in_ssl(self, sock): + """ + Given a single socket, wraps it in TLS. + """ + return ssl.wrap_socket( + sock, + ssl_version=ssl.PROTOCOL_SSLv23, + cert_reqs=ssl.CERT_REQUIRED, + ca_certs=DEFAULT_CA, + certfile=DEFAULT_CERTS['certfile'], + keyfile=DEFAULT_CERTS['keyfile'], + server_side=True + ) + + def test_client_certs_two_files(self): + """ + Having a client cert in a separate file to its associated key works + properly. + """ + done_receiving = Event() + client_certs = [] + + def socket_handler(listener): + sock = listener.accept()[0] + sock = self._wrap_in_ssl(sock) + + client_certs.append(sock.getpeercert()) + + data = b'' + while not data.endswith(b'\r\n\r\n'): + data += sock.recv(8192) + + sock.sendall( + b'HTTP/1.1 200 OK\r\n' + b'Server: testsocket\r\n' + b'Connection: close\r\n' + b'Content-Length: 6\r\n' + b'\r\n' + b'Valid!' + ) + + done_receiving.wait(5) + sock.close() + + self._start_server(socket_handler) + pool = HTTPSConnectionPool( + self.host, + self.port, + cert_file=DEFAULT_CERTS['certfile'], + key_file=DEFAULT_CERTS['keyfile'], + cert_reqs='REQUIRED', + ca_certs=DEFAULT_CA, + ) + self.addCleanup(pool.close) + pool.request('GET', '/', retries=0) + done_receiving.set() + + self.assertEqual(len(client_certs), 1) + + def test_client_certs_one_file(self): + """ + Having a client cert and its associated private key in just one file + works properly. + """ + done_receiving = Event() + client_certs = [] + + def socket_handler(listener): + sock = listener.accept()[0] + sock = self._wrap_in_ssl(sock) + + client_certs.append(sock.getpeercert()) + + data = b'' + while not data.endswith(b'\r\n\r\n'): + data += sock.recv(8192) + + sock.sendall( + b'HTTP/1.1 200 OK\r\n' + b'Server: testsocket\r\n' + b'Connection: close\r\n' + b'Content-Length: 6\r\n' + b'\r\n' + b'Valid!' 
+ ) + + done_receiving.wait(5) + sock.close() + + self._start_server(socket_handler) + pool = HTTPSConnectionPool( + self.host, + self.port, + cert_file=COMBINED_CERT_AND_KEY, + cert_reqs='REQUIRED', + ca_certs=DEFAULT_CA, + ) + self.addCleanup(pool.close) + pool.request('GET', '/', retries=0) + done_receiving.set() + + self.assertEqual(len(client_certs), 1) + + def test_missing_client_certs_raises_error(self): + """ + Having client certs not be present causes an error. + """ + done_receiving = Event() + + def socket_handler(listener): + sock = listener.accept()[0] + + try: + self._wrap_in_ssl(sock) + except ssl.SSLError: + pass + + done_receiving.wait(5) + sock.close() + + self._start_server(socket_handler) + pool = HTTPSConnectionPool( + self.host, + self.port, + cert_reqs='REQUIRED', + ca_certs=DEFAULT_CA, + ) + self.addCleanup(pool.close) + try: + pool.request('GET', '/', retries=0) + except SSLError: + done_receiving.set() + else: + done_receiving.set() + self.fail( + "Expected server to reject connection due to missing client " + "certificates" + ) + + class TestSocketClosing(SocketDummyServerTestCase): def test_recovery_when_server_closes_connection(self): @@ -104,16 +249,17 @@ body = 'Response %d' % i sock.send(('HTTP/1.1 200 OK\r\n' - 'Content-Type: text/plain\r\n' - 'Content-Length: %d\r\n' - '\r\n' - '%s' % (len(body), body)).encode('utf-8')) + 'Content-Type: text/plain\r\n' + 'Content-Length: %d\r\n' + '\r\n' + '%s' % (len(body), body)).encode('utf-8')) sock.close() # simulate a server timing out, closing socket done_closing.set() # let the test know it can proceed self._start_server(socket_handler) pool = HTTPConnectionPool(self.host, self.port) + self.addCleanup(pool.close) response = pool.request('GET', '/', retries=0) self.assertEqual(response.status, 200) @@ -129,11 +275,13 @@ # Does the pool retry if there is no listener on the port? 
host, port = get_unreachable_address() http = HTTPConnectionPool(host, port, maxsize=3, block=True) + self.addCleanup(http.close) self.assertRaises(MaxRetryError, http.request, 'GET', '/', retries=0, release_conn=False) self.assertEqual(http.pool.qsize(), http.pool.maxsize) def test_connection_read_timeout(self): timed_out = Event() + def socket_handler(listener): sock = listener.accept()[0] while not sock.recv(65536).endswith(b'\r\n\r\n'): @@ -143,7 +291,12 @@ sock.close() self._start_server(socket_handler) - http = HTTPConnectionPool(self.host, self.port, timeout=0.001, retries=False, maxsize=3, block=True) + http = HTTPConnectionPool(self.host, self.port, + timeout=0.001, + retries=False, + maxsize=3, + block=True) + self.addCleanup(http.close) try: self.assertRaises(ReadTimeoutError, http.request, 'GET', '/', release_conn=False) @@ -152,9 +305,28 @@ self.assertEqual(http.pool.qsize(), http.pool.maxsize) + def test_read_timeout_dont_retry_method_not_in_whitelist(self): + timed_out = Event() + + def socket_handler(listener): + sock = listener.accept()[0] + sock.recv(65536) + timed_out.wait() + sock.close() + + self._start_server(socket_handler) + pool = HTTPConnectionPool(self.host, self.port, timeout=0.001, retries=True) + self.addCleanup(pool.close) + + try: + self.assertRaises(ReadTimeoutError, pool.request, 'POST', '/') + finally: + timed_out.set() + def test_https_connection_read_timeout(self): """ Handshake timeouts should fail with a Timeout""" timed_out = Event() + def socket_handler(listener): sock = listener.accept()[0] while not sock.recv(65536): @@ -165,6 +337,7 @@ self._start_server(socket_handler) pool = HTTPSConnectionPool(self.host, self.port, timeout=0.001, retries=False) + self.addCleanup(pool.close) try: self.assertRaises(ReadTimeoutError, pool.request, 'GET', '/') finally: @@ -186,10 +359,10 @@ # Now respond immediately. body = 'Response 2' sock.send(('HTTP/1.1 200 OK\r\n' - 'Content-Type: text/plain\r\n' - 'Content-Length: %d\r\n' - '\r\n' - '%s' % (len(body), body)).encode('utf-8')) + 'Content-Type: text/plain\r\n' + 'Content-Length: %d\r\n' + '\r\n' + '%s' % (len(body), body)).encode('utf-8')) sock.close() @@ -204,6 +377,7 @@ self._start_server(socket_handler) t = Timeout(connect=0.001, read=0.001) pool = HTTPConnectionPool(self.host, self.port, timeout=t) + self.addCleanup(pool.close) response = pool.request('GET', '/', retries=1) self.assertEqual(response.status, 200) @@ -231,6 +405,7 @@ self._start_server(socket_handler) pool = HTTPConnectionPool(self.host, self.port) + self.addCleanup(pool.close) response = pool.urlopen('GET', '/', retries=0, preload_content=False, timeout=Timeout(connect=1, read=0.001)) @@ -258,6 +433,7 @@ self._start_server(socket_handler) pool = HTTPConnectionPool(self.host, self.port) + self.addCleanup(pool.close) try: self.assertRaises(ReadTimeoutError, pool.urlopen, @@ -290,6 +466,7 @@ self._start_server(socket_handler) pool = HTTPConnectionPool(self.host, self.port) + self.addCleanup(pool.close) response = pool.request('GET', '/', retries=0, preload_content=False) self.assertRaises(ProtocolError, response.read) @@ -308,10 +485,10 @@ # send unknown http protocol body = "bad http 0.5 response" sock.send(('HTTP/0.5 200 OK\r\n' - 'Content-Type: text/plain\r\n' - 'Content-Length: %d\r\n' - '\r\n' - '%s' % (len(body), body)).encode('utf-8')) + 'Content-Type: text/plain\r\n' + 'Content-Length: %d\r\n' + '\r\n' + '%s' % (len(body), body)).encode('utf-8')) sock.close() # Second request. @@ -322,15 +499,16 @@ # Now respond immediately. 
sock.send(('HTTP/1.1 200 OK\r\n' - 'Content-Type: text/plain\r\n' - 'Content-Length: %d\r\n' - '\r\n' - 'foo' % (len('foo'))).encode('utf-8')) + 'Content-Type: text/plain\r\n' + 'Content-Length: %d\r\n' + '\r\n' + 'foo' % (len('foo'))).encode('utf-8')) sock.close() # Close the socket. self._start_server(socket_handler) pool = HTTPConnectionPool(self.host, self.port) + self.addCleanup(pool.close) retry = Retry(read=1) response = pool.request('GET', '/', retries=retry) self.assertEqual(response.status, 200) @@ -377,13 +555,11 @@ buf = sock.recv(65536) # Send partial response and close socket. - sock.send(( - 'HTTP/1.1 200 OK\r\n' - 'Content-Type: text/plain\r\n' - 'Content-Length: %d\r\n' - '\r\n' - '%s' % (len(body), partial_body)).encode('utf-8') - ) + sock.send(('HTTP/1.1 200 OK\r\n' + 'Content-Type: text/plain\r\n' + 'Content-Length: %d\r\n' + '\r\n' + '%s' % (len(body), partial_body)).encode('utf-8')) sock.close() self._start_server(socket_handler) @@ -479,9 +655,9 @@ buf = sock.recv(65536) sock.send(('HTTP/1.1 200 OK\r\n' - 'Content-Type: text/plain\r\n' - 'Content-Length: 0\r\n' - '\r\n').encode('utf-8')) + 'Content-Type: text/plain\r\n' + 'Content-Length: 0\r\n' + '\r\n').encode('utf-8')) # Wait for the socket to close. done_closing.wait(timeout=1) @@ -497,6 +673,7 @@ self._start_server(socket_handler) pool = HTTPConnectionPool(self.host, self.port) + self.addCleanup(pool.close) response = pool.request('GET', '/', retries=0, preload_content=False) self.assertEqual(response.status, 200) @@ -508,7 +685,8 @@ self.fail("Timed out waiting for connection close") def test_release_conn_param_is_respected_after_timeout_retry(self): - """For successful ```urlopen(release_conn=False)```, the connection isn't released, even after a retry. + """For successful ```urlopen(release_conn=False)```, + the connection isn't released, even after a retry. This test allows a retry: one request fails, the next request succeeds. @@ -579,15 +757,16 @@ buf += sock.recv(65536) sock.send(('HTTP/1.1 200 OK\r\n' - 'Content-Type: text/plain\r\n' - 'Content-Length: %d\r\n' - '\r\n' - '%s' % (len(buf), buf.decode('utf-8'))).encode('utf-8')) + 'Content-Type: text/plain\r\n' + 'Content-Length: %d\r\n' + '\r\n' + '%s' % (len(buf), buf.decode('utf-8'))).encode('utf-8')) sock.close() self._start_server(echo_socket_handler) base_url = 'http://%s:%d' % (self.host, self.port) proxy = proxy_from_url(base_url) + self.addCleanup(proxy.clear) r = proxy.request('GET', 'http://google.com/') @@ -614,10 +793,10 @@ buf += sock.recv(65536) sock.send(('HTTP/1.1 200 OK\r\n' - 'Content-Type: text/plain\r\n' - 'Content-Length: %d\r\n' - '\r\n' - '%s' % (len(buf), buf.decode('utf-8'))).encode('utf-8')) + 'Content-Type: text/plain\r\n' + 'Content-Length: %d\r\n' + '\r\n' + '%s' % (len(buf), buf.decode('utf-8'))).encode('utf-8')) sock.close() self._start_server(echo_socket_handler) @@ -626,6 +805,7 @@ # Define some proxy headers. 
proxy_headers = HTTPHeaderDict({'For The Proxy': 'YEAH!'}) proxy = proxy_from_url(base_url, proxy_headers=proxy_headers) + self.addCleanup(proxy.clear) conn = proxy.connection_from_url('http://www.google.com/') @@ -653,10 +833,10 @@ buf += sock.recv(65536) sock.send(('HTTP/1.1 200 OK\r\n' - 'Content-Type: text/plain\r\n' - 'Content-Length: %d\r\n' - '\r\n' - '%s' % (len(buf), buf.decode('utf-8'))).encode('utf-8')) + 'Content-Type: text/plain\r\n' + 'Content-Length: %d\r\n' + '\r\n' + '%s' % (len(buf), buf.decode('utf-8'))).encode('utf-8')) sock.close() close_event.set() @@ -664,6 +844,7 @@ base_url = 'http://%s:%d' % (self.host, self.port) proxy = proxy_from_url(base_url) + self.addCleanup(proxy.clear) conn = proxy.connection_from_url('http://www.google.com') r = conn.urlopen('GET', 'http://www.google.com', @@ -672,8 +853,8 @@ close_event.wait(timeout=1) self.assertRaises(ProxyError, conn.urlopen, 'GET', - 'http://www.google.com', - assert_same_host=False, retries=False) + 'http://www.google.com', + assert_same_host=False, retries=False) def test_connect_reconn(self): def proxy_ssl_one(listener): @@ -712,6 +893,7 @@ '\r\n' 'Hi').encode('utf-8')) ssl_sock.close() + def echo_socket_handler(listener): proxy_ssl_one(listener) proxy_ssl_one(listener) @@ -720,6 +902,7 @@ base_url = 'http://%s:%d' % (self.host, self.port) proxy = proxy_from_url(base_url) + self.addCleanup(proxy.clear) url = 'https://{0}'.format(self.host) conn = proxy.connection_from_url(url) @@ -757,6 +940,7 @@ self._start_server(socket_handler) pool = HTTPSConnectionPool(self.host, self.port) + self.addCleanup(pool.close) self.assertRaises(SSLError, pool.request, 'GET', '/', retries=0) @@ -789,6 +973,7 @@ self._start_server(socket_handler) pool = HTTPSConnectionPool(self.host, self.port) + self.addCleanup(pool.close) response = pool.urlopen('GET', '/', retries=0, preload_content=False, timeout=Timeout(connect=1, read=0.001)) @@ -821,9 +1006,9 @@ ':9A:8C:B6:07:CA:58:EE:74:5E') def request(): + pool = HTTPSConnectionPool(self.host, self.port, + assert_fingerprint=fingerprint) try: - pool = HTTPSConnectionPool(self.host, self.port, - assert_fingerprint=fingerprint) response = pool.urlopen('GET', '/', preload_content=False, timeout=Timeout(connect=1, read=0.001)) response.read() @@ -844,6 +1029,7 @@ b'\r\n' ) pool = HTTPConnectionPool(self.host, self.port, retries=False) + self.addCleanup(pool.close) self.assertRaises(ProtocolError, pool.request, 'GET', '/') def test_unknown_protocol(self): @@ -853,10 +1039,11 @@ b'\r\n' ) pool = HTTPConnectionPool(self.host, self.port, retries=False) + self.addCleanup(pool.close) self.assertRaises(ProtocolError, pool.request, 'GET', '/') -class TestHeaders(SocketDummyServerTestCase): +class TestHeaders(SocketDummyServerTestCase): @onlyPy3 def test_httplib_headers_case_insensitive(self): self.start_response_handler( @@ -866,9 +1053,10 @@ b'\r\n' ) pool = HTTPConnectionPool(self.host, self.port, retries=False) + self.addCleanup(pool.close) HEADERS = {'Content-Length': '0', 'Content-type': 'text/plain'} r = pool.request('GET', '/') - self.assertEqual(HEADERS, dict(r.headers.items())) # to preserve case sensitivity + self.assertEqual(HEADERS, dict(r.headers.items())) # to preserve case sensitivity def test_headers_are_sent_with_the_original_case(self): headers = {'foo': 'bar', 'bAz': 'quux'} @@ -900,9 +1088,10 @@ expected_headers.update(headers) pool = HTTPConnectionPool(self.host, self.port, retries=False) + self.addCleanup(pool.close) pool.request('GET', '/', headers=HTTPHeaderDict(headers)) 
self.assertEqual(expected_headers, parsed_headers) - + def test_request_headers_are_sent_in_the_original_order(self): # NOTE: Probability this test gives a false negative is 1/(K!) K = 16 @@ -910,7 +1099,7 @@ # so that if the internal implementation tries to sort them, # a change will be detected. expected_request_headers = [(u'X-Header-%d' % i, str(i)) for i in reversed(range(K))] - + actual_request_headers = [] def socket_handler(listener): @@ -938,9 +1127,10 @@ self._start_server(socket_handler) pool = HTTPConnectionPool(self.host, self.port, retries=False) + self.addCleanup(pool.close) pool.request('GET', '/', headers=OrderedDict(expected_request_headers)) self.assertEqual(expected_request_headers, actual_request_headers) - + def test_response_headers_are_returned_in_the_original_order(self): # NOTE: Probability this test gives a false negative is 1/(K!) K = 16 @@ -948,7 +1138,7 @@ # so that if the internal implementation tries to sort them, # a change will be detected. expected_response_headers = [('X-Header-%d' % i, str(i)) for i in reversed(range(K))] - + def socket_handler(listener): sock = listener.accept()[0] @@ -966,6 +1156,7 @@ self._start_server(socket_handler) pool = HTTPConnectionPool(self.host, self.port) + self.addCleanup(pool.close) r = pool.request('GET', '/', retries=0) actual_response_headers = [ (k, v) for (k, v) in r.headers.items() @@ -990,6 +1181,7 @@ ) pool = HTTPConnectionPool(self.host, self.port, retries=False) + self.addCleanup(pool.close) with LogRecorder() as logs: pool.request('GET', '/') @@ -1028,6 +1220,7 @@ b'\r\n' ) pool = HTTPConnectionPool(self.host, self.port, retries=False) + self.addCleanup(pool.close) r = pool.request('HEAD', '/', timeout=1, preload_content=False) # stream will use the read_chunked method here. @@ -1041,6 +1234,7 @@ b'\r\n' ) pool = HTTPConnectionPool(self.host, self.port, retries=False) + self.addCleanup(pool.close) r = pool.request('HEAD', '/', timeout=1, preload_content=False) # stream will use the read method here. @@ -1070,6 +1264,7 @@ self._start_server(socket_handler) pool = HTTPConnectionPool(self.host, self.port, retries=False) + self.addCleanup(pool.close) r = pool.request('GET', '/', timeout=1, preload_content=False) # Stream should read to the end. 
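The ``self.addCleanup(...)`` lines introduced throughout the test hunks above all apply the same standard-library idiom: registered cleanups run during teardown even when the test body fails, so pools and their sockets are never leaked into later tests. A minimal sketch of the pattern, using illustrative names rather than code from this patch:: import unittest from urllib3 import HTTPConnectionPool class ExampleCleanupTest(unittest.TestCase): def test_request(self): pool = HTTPConnectionPool('localhost', 80) # Registered immediately after construction, the cleanup runs # at teardown even if the assertion below raises, so the pool # is always closed. self.addCleanup(pool.close) self.assertEqual(pool.num_connections, 0)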
@@ -1077,6 +1272,7 @@ done_event.set() + class TestBadContentLength(SocketDummyServerTestCase): def test_enforce_content_length_get(self): done_event = Event() @@ -1100,6 +1296,7 @@ self._start_server(socket_handler) conn = HTTPConnectionPool(self.host, self.port, maxsize=1) + self.addCleanup(conn.close) # Test stream read when content length less than headers claim get_response = conn.request('GET', url='/', preload_content=False, @@ -1137,8 +1334,9 @@ self._start_server(socket_handler) conn = HTTPConnectionPool(self.host, self.port, maxsize=1) + self.addCleanup(conn.close) - #Test stream on 0 length body + # Test stream on 0 length body head_response = conn.request('HEAD', url='/', preload_content=False, enforce_content_length=True) data = [chunk for chunk in head_response.stream(1)] diff -Nru python-urllib3-1.19.1/urllib3/_collections.py python-urllib3-1.21.1/urllib3/_collections.py --- python-urllib3-1.19.1/urllib3/_collections.py 2016-09-12 09:02:32.000000000 +0000 +++ python-urllib3-1.21.1/urllib3/_collections.py 2017-04-25 11:10:19.000000000 +0000 @@ -144,7 +144,7 @@ self.extend(kwargs) def __setitem__(self, key, val): - self._container[key.lower()] = (key, val) + self._container[key.lower()] = [key, val] return self._container[key.lower()] def __getitem__(self, key): @@ -215,18 +215,11 @@ 'bar, baz' """ key_lower = key.lower() - new_vals = key, val + new_vals = [key, val] # Keep the common case aka no item present as fast as possible vals = self._container.setdefault(key_lower, new_vals) if new_vals is not vals: - # new_vals was not inserted, as there was a previous one - if isinstance(vals, list): - # If already several items got inserted, we have a list - vals.append(val) - else: - # vals should be a tuple then, i.e. only one item so far - # Need to convert the tuple to list for further extension - self._container[key_lower] = [vals[0], vals[1], val] + vals.append(val) def extend(self, *args, **kwargs): """Generic import function for any type of header-like object. @@ -262,10 +255,7 @@ except KeyError: return [] else: - if isinstance(vals, tuple): - return [vals[1]] - else: - return vals[1:] + return vals[1:] # Backwards compatibility for httplib getheaders = getlist diff -Nru python-urllib3-1.19.1/urllib3/connectionpool.py python-urllib3-1.21.1/urllib3/connectionpool.py --- python-urllib3-1.19.1/urllib3/connectionpool.py 2016-11-16 09:43:02.000000000 +0000 +++ python-urllib3-1.21.1/urllib3/connectionpool.py 2017-05-02 09:08:45.000000000 +0000 @@ -25,7 +25,7 @@ ) from .packages.ssl_match_hostname import CertificateError from .packages import six -from .packages.six.moves.queue import LifoQueue, Empty, Full +from .packages.six.moves import queue from .connection import ( port_by_scheme, DummyConnection, @@ -36,6 +36,7 @@ from .response import HTTPResponse from .util.connection import is_connection_dropped +from .util.request import set_file_position from .util.response import assert_header_parsing from .util.retry import Retry from .util.timeout import Timeout @@ -61,19 +62,13 @@ """ scheme = None - QueueCls = LifoQueue + QueueCls = queue.LifoQueue def __init__(self, host, port=None): if not host: raise LocationValueError("No host specified.") - # httplib doesn't like it when we include brackets in ipv6 addresses - # Specifically, if we include brackets but also pass the port then - # httplib crazily doubles up the square brackets on the Host header. - # Instead, we need to make sure we never pass ``None`` as the port. 
- # However, for backward compatibility reasons we can't actually - # *assert* that. - self.host = host.strip('[]') + self.host = _ipv6_host(host).lower() self.port = port def __str__(self): @@ -154,7 +149,7 @@ A dictionary with proxy headers, should not be used directly, instead, see :class:`urllib3.connectionpool.ProxyManager`" - :param \**conn_kw: + :param \\**conn_kw: Additional parameters are used to create fresh :class:`urllib3.connection.HTTPConnection`, :class:`urllib3.connection.HTTPSConnection` instances. """ @@ -235,7 +230,7 @@ except AttributeError: # self.pool is None raise ClosedPoolError(self, "Pool is closed.") - except Empty: + except queue.Empty: if self.block: raise EmptyPoolError(self, "Pool reached maximum size and no more " @@ -274,7 +269,7 @@ except AttributeError: # self.pool is None. pass - except Full: + except queue.Full: # This should never happen if self.block == True log.warning( "Connection pool is full, discarding connection: %s", @@ -424,7 +419,7 @@ if conn: conn.close() - except Empty: + except queue.Empty: pass # Done. def is_same_host(self, url): @@ -438,6 +433,8 @@ # TODO: Add optional support for socket.gethostbyname checking. scheme, host, port = get_host(url) + host = _ipv6_host(host).lower() + # Use explicit default port for comparison when none is given if self.port and not port: port = port_by_scheme.get(scheme) @@ -449,7 +446,7 @@ def urlopen(self, method, url, body=None, headers=None, retries=None, redirect=True, assert_same_host=True, timeout=_Default, pool_timeout=None, release_conn=None, chunked=False, - **response_kw): + body_pos=None, **response_kw): """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all @@ -531,7 +528,12 @@ encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. - :param \**response_kw: + :param int body_pos: + Position to seek to in file-like body in the event of a retry or + redirect. Typically this won't need to be set because urllib3 will + auto-populate the value when needed. + + :param \\**response_kw: Additional parameters are passed to :meth:`urllib3.response.HTTPResponse.from_httplib` """ @@ -576,6 +578,10 @@ # ensures we do proper cleanup in finally. clean_exit = False + # Rewind body position, if needed. Record current position + # for future rewinds in the event of a redirect/retry. + body_pos = set_file_position(body, body_pos) + try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) @@ -612,7 +618,7 @@ # Everything went great! clean_exit = True - except Empty: + except queue.Empty: # Timed out by queue. raise EmptyPoolError(self, "No pool connections are available.") @@ -668,7 +674,8 @@ return self.urlopen(method, url, body, headers, retries, redirect, assert_same_host, timeout=timeout, pool_timeout=pool_timeout, - release_conn=release_conn, **response_kw) + release_conn=release_conn, body_pos=body_pos, + **response_kw) # Handle redirect? redirect_location = redirect and response.get_redirect_location() @@ -693,7 +700,8 @@ retries=retries, redirect=redirect, assert_same_host=assert_same_host, timeout=timeout, pool_timeout=pool_timeout, - release_conn=release_conn, **response_kw) + release_conn=release_conn, body_pos=body_pos, + **response_kw) # Check if we should retry the HTTP response. 
has_retry_after = bool(response.getheader('Retry-After')) @@ -714,7 +722,8 @@ retries=retries, redirect=redirect, assert_same_host=assert_same_host, timeout=timeout, pool_timeout=pool_timeout, - release_conn=release_conn, **response_kw) + release_conn=release_conn, + body_pos=body_pos, **response_kw) return response @@ -853,7 +862,7 @@ :param url: Absolute URL string that must include the scheme. Port is optional. - :param \**kw: + :param \\**kw: Passes additional parameters to the constructor of the appropriate :class:`.ConnectionPool`. Useful for specifying things like timeout, maxsize, headers, etc. @@ -869,3 +878,22 @@ return HTTPSConnectionPool(host, port=port, **kw) else: return HTTPConnectionPool(host, port=port, **kw) + + +def _ipv6_host(host): + """ + Process IPv6 address literals + """ + + # httplib doesn't like it when we include brackets in IPv6 addresses + # Specifically, if we include brackets but also pass the port then + # httplib crazily doubles up the square brackets on the Host header. + # Instead, we need to make sure we never pass ``None`` as the port. + # However, for backward compatibility reasons we can't actually + # *assert* that. See http://bugs.python.org/issue28539 + # + # Also if an IPv6 address literal has a zone identifier, the + # percent sign might be URIencoded, convert it back into ASCII + if host.startswith('[') and host.endswith(']'): + host = host.replace('%25', '%').strip('[]') + return host diff -Nru python-urllib3-1.19.1/urllib3/connection.py python-urllib3-1.21.1/urllib3/connection.py --- python-urllib3-1.19.1/urllib3/connection.py 2016-11-16 09:43:02.000000000 +0000 +++ python-urllib3-1.21.1/urllib3/connection.py 2017-05-02 09:08:45.000000000 +0000 @@ -56,7 +56,10 @@ 'https': 443, } -RECENT_DATE = datetime.date(2014, 1, 1) +# When updating RECENT_DATE, move it to +# within two years of the current date, and no +# earlier than 6 months ago. +RECENT_DATE = datetime.date(2016, 1, 1) class DummyConnection(object): @@ -326,7 +329,11 @@ assert_fingerprint(self.sock.getpeercert(binary_form=True), self.assert_fingerprint) elif context.verify_mode != ssl.CERT_NONE \ + and not getattr(context, 'check_hostname', False) \ and self.assert_hostname is not False: + # While urllib3 attempts to always turn off hostname matching from + # the TLS library, this cannot always be done. So we check whether + # the TLS Library still thinks it's matching hostnames. cert = self.sock.getpeercert() if not cert.get('subjectAltName', ()): warnings.warn(( diff -Nru python-urllib3-1.19.1/urllib3/contrib/appengine.py python-urllib3-1.21.1/urllib3/contrib/appengine.py --- python-urllib3-1.19.1/urllib3/contrib/appengine.py 2016-11-16 09:47:17.000000000 +0000 +++ python-urllib3-1.21.1/urllib3/contrib/appengine.py 2017-01-19 09:51:46.000000000 +0000 @@ -111,7 +111,7 @@ warnings.warn( "urllib3 is using URLFetch on Google App Engine sandbox instead " "of sockets. 
To use sockets directly instead of URLFetch see " - "https://urllib3.readthedocs.io/en/latest/contrib.html.", + "https://urllib3.readthedocs.io/en/latest/reference/urllib3.contrib.html.", AppEnginePlatformWarning) RequestMethods.__init__(self, headers) diff -Nru python-urllib3-1.19.1/urllib3/contrib/pyopenssl.py python-urllib3-1.21.1/urllib3/contrib/pyopenssl.py --- python-urllib3-1.19.1/urllib3/contrib/pyopenssl.py 2016-11-16 09:43:02.000000000 +0000 +++ python-urllib3-1.21.1/urllib3/contrib/pyopenssl.py 2017-05-02 09:08:45.000000000 +0000 @@ -43,7 +43,6 @@ """ from __future__ import absolute_import -import idna import OpenSSL.SSL from cryptography import x509 from cryptography.hazmat.backends.openssl import backend as openssl_backend @@ -60,7 +59,6 @@ import logging import ssl -import select import six import sys @@ -111,6 +109,8 @@ def inject_into_urllib3(): 'Monkey-patch urllib3 with PyOpenSSL-backed SSL-support.' + _validate_dependencies_met() + util.ssl_.SSLContext = PyOpenSSLContext util.HAS_SNI = HAS_SNI util.ssl_.HAS_SNI = HAS_SNI @@ -128,6 +128,26 @@ util.ssl_.IS_PYOPENSSL = False +def _validate_dependencies_met(): + """ + Verifies that PyOpenSSL's package-level dependencies have been met. + Throws `ImportError` if they are not met. + """ + # Method added in `cryptography==1.1`; not available in older versions + from cryptography.x509.extensions import Extensions + if getattr(Extensions, "get_extension_for_class", None) is None: + raise ImportError("'cryptography' module missing required functionality. " + "Try upgrading to v1.3.4 or newer.") + + # pyOpenSSL 0.14 and above use cryptography for OpenSSL bindings. The _x509 + # attribute is only present on those versions. + from OpenSSL.crypto import X509 + x509 = X509() + if getattr(x509, "_x509", None) is None: + raise ImportError("'pyOpenSSL' module missing required functionality. " + "Try upgrading to v0.14 or newer.") + + def _dnsname_to_stdlib(name): """ Converts a dNSName SubjectAlternativeName field to the form used by the @@ -144,6 +164,8 @@ that we can't just safely call `idna.encode`: it can explode for wildcard names. This avoids that problem. 
""" + import idna + for prefix in [u'*.', u'.']: if name.startswith(prefix): name = name[len(prefix):] @@ -242,8 +264,7 @@ else: raise except OpenSSL.SSL.WantReadError: - rd, wd, ed = select.select( - [self.socket], [], [], self.socket.gettimeout()) + rd = util.wait_for_read(self.socket, self.socket.gettimeout()) if not rd: raise timeout('The read operation timed out') else: @@ -265,8 +286,7 @@ else: raise except OpenSSL.SSL.WantReadError: - rd, wd, ed = select.select( - [self.socket], [], [], self.socket.gettimeout()) + rd = util.wait_for_read(self.socket, self.socket.gettimeout()) if not rd: raise timeout('The read operation timed out') else: @@ -280,11 +300,12 @@ try: return self.connection.send(data) except OpenSSL.SSL.WantWriteError: - _, wlist, _ = select.select([], [self.socket], [], - self.socket.gettimeout()) - if not wlist: + wr = util.wait_for_write(self.socket, self.socket.gettimeout()) + if not wr: raise timeout() continue + except OpenSSL.SSL.SysCallError as e: + raise SocketError(str(e)) def sendall(self, data): total_sent = 0 @@ -416,7 +437,7 @@ try: cnx.do_handshake() except OpenSSL.SSL.WantReadError: - rd, _, _ = select.select([sock], [], [], sock.gettimeout()) + rd = util.wait_for_read(sock, sock.gettimeout()) if not rd: raise timeout('select timed out') continue diff -Nru python-urllib3-1.19.1/urllib3/contrib/_securetransport/bindings.py python-urllib3-1.21.1/urllib3/contrib/_securetransport/bindings.py --- python-urllib3-1.19.1/urllib3/contrib/_securetransport/bindings.py 1970-01-01 00:00:00.000000000 +0000 +++ python-urllib3-1.21.1/urllib3/contrib/_securetransport/bindings.py 2017-04-25 11:28:36.000000000 +0000 @@ -0,0 +1,590 @@ +""" +This module uses ctypes to bind a whole bunch of functions and constants from +SecureTransport. The goal here is to provide the low-level API to +SecureTransport. These are essentially the C-level functions and constants, and +they're pretty gross to work with. + +This code is a bastardised version of the code found in Will Bond's oscrypto +library. An enormous debt is owed to him for blazing this trail for us. For +that reason, this code should be considered to be covered both by urllib3's +license and by oscrypto's: + + Copyright (c) 2015-2016 Will Bond + + Permission is hereby granted, free of charge, to any person obtaining a + copy of this software and associated documentation files (the "Software"), + to deal in the Software without restriction, including without limitation + the rights to use, copy, modify, merge, publish, distribute, sublicense, + and/or sell copies of the Software, and to permit persons to whom the + Software is furnished to do so, subject to the following conditions: + + The above copyright notice and this permission notice shall be included in + all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. 
+""" +from __future__ import absolute_import + +import platform +from ctypes.util import find_library +from ctypes import ( + c_void_p, c_int32, c_char_p, c_size_t, c_byte, c_uint32, c_ulong, c_long, + c_bool +) +from ctypes import CDLL, POINTER, CFUNCTYPE + + +security_path = find_library('Security') +if not security_path: + raise ImportError('The library Security could not be found') + + +core_foundation_path = find_library('CoreFoundation') +if not core_foundation_path: + raise ImportError('The library CoreFoundation could not be found') + + +version = platform.mac_ver()[0] +version_info = tuple(map(int, version.split('.'))) +if version_info < (10, 8): + raise OSError( + 'Only OS X 10.8 and newer are supported, not %s.%s' % ( + version_info[0], version_info[1] + ) + ) + +Security = CDLL(security_path, use_errno=True) +CoreFoundation = CDLL(core_foundation_path, use_errno=True) + +Boolean = c_bool +CFIndex = c_long +CFStringEncoding = c_uint32 +CFData = c_void_p +CFString = c_void_p +CFArray = c_void_p +CFMutableArray = c_void_p +CFDictionary = c_void_p +CFError = c_void_p +CFType = c_void_p +CFTypeID = c_ulong + +CFTypeRef = POINTER(CFType) +CFAllocatorRef = c_void_p + +OSStatus = c_int32 + +CFDataRef = POINTER(CFData) +CFStringRef = POINTER(CFString) +CFArrayRef = POINTER(CFArray) +CFMutableArrayRef = POINTER(CFMutableArray) +CFDictionaryRef = POINTER(CFDictionary) +CFArrayCallBacks = c_void_p +CFDictionaryKeyCallBacks = c_void_p +CFDictionaryValueCallBacks = c_void_p + +SecCertificateRef = POINTER(c_void_p) +SecExternalFormat = c_uint32 +SecExternalItemType = c_uint32 +SecIdentityRef = POINTER(c_void_p) +SecItemImportExportFlags = c_uint32 +SecItemImportExportKeyParameters = c_void_p +SecKeychainRef = POINTER(c_void_p) +SSLProtocol = c_uint32 +SSLCipherSuite = c_uint32 +SSLContextRef = POINTER(c_void_p) +SecTrustRef = POINTER(c_void_p) +SSLConnectionRef = c_uint32 +SecTrustResultType = c_uint32 +SecTrustOptionFlags = c_uint32 +SSLProtocolSide = c_uint32 +SSLConnectionType = c_uint32 +SSLSessionOption = c_uint32 + + +try: + Security.SecItemImport.argtypes = [ + CFDataRef, + CFStringRef, + POINTER(SecExternalFormat), + POINTER(SecExternalItemType), + SecItemImportExportFlags, + POINTER(SecItemImportExportKeyParameters), + SecKeychainRef, + POINTER(CFArrayRef), + ] + Security.SecItemImport.restype = OSStatus + + Security.SecCertificateGetTypeID.argtypes = [] + Security.SecCertificateGetTypeID.restype = CFTypeID + + Security.SecIdentityGetTypeID.argtypes = [] + Security.SecIdentityGetTypeID.restype = CFTypeID + + Security.SecKeyGetTypeID.argtypes = [] + Security.SecKeyGetTypeID.restype = CFTypeID + + Security.SecCertificateCreateWithData.argtypes = [ + CFAllocatorRef, + CFDataRef + ] + Security.SecCertificateCreateWithData.restype = SecCertificateRef + + Security.SecCertificateCopyData.argtypes = [ + SecCertificateRef + ] + Security.SecCertificateCopyData.restype = CFDataRef + + Security.SecCopyErrorMessageString.argtypes = [ + OSStatus, + c_void_p + ] + Security.SecCopyErrorMessageString.restype = CFStringRef + + Security.SecIdentityCreateWithCertificate.argtypes = [ + CFTypeRef, + SecCertificateRef, + POINTER(SecIdentityRef) + ] + Security.SecIdentityCreateWithCertificate.restype = OSStatus + + Security.SecKeychainCreate.argtypes = [ + c_char_p, + c_uint32, + c_void_p, + Boolean, + c_void_p, + POINTER(SecKeychainRef) + ] + Security.SecKeychainCreate.restype = OSStatus + + Security.SecKeychainDelete.argtypes = [ + SecKeychainRef + ] + Security.SecKeychainDelete.restype = OSStatus + + 
Security.SecPKCS12Import.argtypes = [ + CFDataRef, + CFDictionaryRef, + POINTER(CFArrayRef) + ] + Security.SecPKCS12Import.restype = OSStatus + + SSLReadFunc = CFUNCTYPE(OSStatus, SSLConnectionRef, c_void_p, POINTER(c_size_t)) + SSLWriteFunc = CFUNCTYPE(OSStatus, SSLConnectionRef, POINTER(c_byte), POINTER(c_size_t)) + + Security.SSLSetIOFuncs.argtypes = [ + SSLContextRef, + SSLReadFunc, + SSLWriteFunc + ] + Security.SSLSetIOFuncs.restype = OSStatus + + Security.SSLSetPeerID.argtypes = [ + SSLContextRef, + c_char_p, + c_size_t + ] + Security.SSLSetPeerID.restype = OSStatus + + Security.SSLSetCertificate.argtypes = [ + SSLContextRef, + CFArrayRef + ] + Security.SSLSetCertificate.restype = OSStatus + + Security.SSLSetCertificateAuthorities.argtypes = [ + SSLContextRef, + CFTypeRef, + Boolean + ] + Security.SSLSetCertificateAuthorities.restype = OSStatus + + Security.SSLSetConnection.argtypes = [ + SSLContextRef, + SSLConnectionRef + ] + Security.SSLSetConnection.restype = OSStatus + + Security.SSLSetPeerDomainName.argtypes = [ + SSLContextRef, + c_char_p, + c_size_t + ] + Security.SSLSetPeerDomainName.restype = OSStatus + + Security.SSLHandshake.argtypes = [ + SSLContextRef + ] + Security.SSLHandshake.restype = OSStatus + + Security.SSLRead.argtypes = [ + SSLContextRef, + c_char_p, + c_size_t, + POINTER(c_size_t) + ] + Security.SSLRead.restype = OSStatus + + Security.SSLWrite.argtypes = [ + SSLContextRef, + c_char_p, + c_size_t, + POINTER(c_size_t) + ] + Security.SSLWrite.restype = OSStatus + + Security.SSLClose.argtypes = [ + SSLContextRef + ] + Security.SSLClose.restype = OSStatus + + Security.SSLGetNumberSupportedCiphers.argtypes = [ + SSLContextRef, + POINTER(c_size_t) + ] + Security.SSLGetNumberSupportedCiphers.restype = OSStatus + + Security.SSLGetSupportedCiphers.argtypes = [ + SSLContextRef, + POINTER(SSLCipherSuite), + POINTER(c_size_t) + ] + Security.SSLGetSupportedCiphers.restype = OSStatus + + Security.SSLSetEnabledCiphers.argtypes = [ + SSLContextRef, + POINTER(SSLCipherSuite), + c_size_t + ] + Security.SSLSetEnabledCiphers.restype = OSStatus + + Security.SSLGetNumberEnabledCiphers.argtypes = [ + SSLContextRef, + POINTER(c_size_t) + ] + Security.SSLGetNumberEnabledCiphers.restype = OSStatus + + Security.SSLGetEnabledCiphers.argtypes = [ + SSLContextRef, + POINTER(SSLCipherSuite), + POINTER(c_size_t) + ] + Security.SSLGetEnabledCiphers.restype = OSStatus + + Security.SSLGetNegotiatedCipher.argtypes = [ + SSLContextRef, + POINTER(SSLCipherSuite) + ] + Security.SSLGetNegotiatedCipher.restype = OSStatus + + Security.SSLGetNegotiatedProtocolVersion.argtypes = [ + SSLContextRef, + POINTER(SSLProtocol) + ] + Security.SSLGetNegotiatedProtocolVersion.restype = OSStatus + + Security.SSLCopyPeerTrust.argtypes = [ + SSLContextRef, + POINTER(SecTrustRef) + ] + Security.SSLCopyPeerTrust.restype = OSStatus + + Security.SecTrustSetAnchorCertificates.argtypes = [ + SecTrustRef, + CFArrayRef + ] + Security.SecTrustSetAnchorCertificates.restype = OSStatus + + Security.SecTrustSetAnchorCertificatesOnly.argtypes = [ + SecTrustRef, + Boolean + ] + Security.SecTrustSetAnchorCertificatesOnly.restype = OSStatus + + Security.SecTrustEvaluate.argtypes = [ + SecTrustRef, + POINTER(SecTrustResultType) + ] + Security.SecTrustEvaluate.restype = OSStatus + + Security.SecTrustGetCertificateCount.argtypes = [ + SecTrustRef + ] + Security.SecTrustGetCertificateCount.restype = CFIndex + + Security.SecTrustGetCertificateAtIndex.argtypes = [ + SecTrustRef, + CFIndex + ] + 
Security.SecTrustGetCertificateAtIndex.restype = SecCertificateRef + + Security.SSLCreateContext.argtypes = [ + CFAllocatorRef, + SSLProtocolSide, + SSLConnectionType + ] + Security.SSLCreateContext.restype = SSLContextRef + + Security.SSLSetSessionOption.argtypes = [ + SSLContextRef, + SSLSessionOption, + Boolean + ] + Security.SSLSetSessionOption.restype = OSStatus + + Security.SSLSetProtocolVersionMin.argtypes = [ + SSLContextRef, + SSLProtocol + ] + Security.SSLSetProtocolVersionMin.restype = OSStatus + + Security.SSLSetProtocolVersionMax.argtypes = [ + SSLContextRef, + SSLProtocol + ] + Security.SSLSetProtocolVersionMax.restype = OSStatus + + Security.SecCopyErrorMessageString.argtypes = [ + OSStatus, + c_void_p + ] + Security.SecCopyErrorMessageString.restype = CFStringRef + + Security.SSLReadFunc = SSLReadFunc + Security.SSLWriteFunc = SSLWriteFunc + Security.SSLContextRef = SSLContextRef + Security.SSLProtocol = SSLProtocol + Security.SSLCipherSuite = SSLCipherSuite + Security.SecIdentityRef = SecIdentityRef + Security.SecKeychainRef = SecKeychainRef + Security.SecTrustRef = SecTrustRef + Security.SecTrustResultType = SecTrustResultType + Security.SecExternalFormat = SecExternalFormat + Security.OSStatus = OSStatus + + Security.kSecImportExportPassphrase = CFStringRef.in_dll( + Security, 'kSecImportExportPassphrase' + ) + Security.kSecImportItemIdentity = CFStringRef.in_dll( + Security, 'kSecImportItemIdentity' + ) + + # CoreFoundation time! + CoreFoundation.CFRetain.argtypes = [ + CFTypeRef + ] + CoreFoundation.CFRetain.restype = CFTypeRef + + CoreFoundation.CFRelease.argtypes = [ + CFTypeRef + ] + CoreFoundation.CFRelease.restype = None + + CoreFoundation.CFGetTypeID.argtypes = [ + CFTypeRef + ] + CoreFoundation.CFGetTypeID.restype = CFTypeID + + CoreFoundation.CFStringCreateWithCString.argtypes = [ + CFAllocatorRef, + c_char_p, + CFStringEncoding + ] + CoreFoundation.CFStringCreateWithCString.restype = CFStringRef + + CoreFoundation.CFStringGetCStringPtr.argtypes = [ + CFStringRef, + CFStringEncoding + ] + CoreFoundation.CFStringGetCStringPtr.restype = c_char_p + + CoreFoundation.CFStringGetCString.argtypes = [ + CFStringRef, + c_char_p, + CFIndex, + CFStringEncoding + ] + CoreFoundation.CFStringGetCString.restype = c_bool + + CoreFoundation.CFDataCreate.argtypes = [ + CFAllocatorRef, + c_char_p, + CFIndex + ] + CoreFoundation.CFDataCreate.restype = CFDataRef + + CoreFoundation.CFDataGetLength.argtypes = [ + CFDataRef + ] + CoreFoundation.CFDataGetLength.restype = CFIndex + + CoreFoundation.CFDataGetBytePtr.argtypes = [ + CFDataRef + ] + CoreFoundation.CFDataGetBytePtr.restype = c_void_p + + CoreFoundation.CFDictionaryCreate.argtypes = [ + CFAllocatorRef, + POINTER(CFTypeRef), + POINTER(CFTypeRef), + CFIndex, + CFDictionaryKeyCallBacks, + CFDictionaryValueCallBacks + ] + CoreFoundation.CFDictionaryCreate.restype = CFDictionaryRef + + CoreFoundation.CFDictionaryGetValue.argtypes = [ + CFDictionaryRef, + CFTypeRef + ] + CoreFoundation.CFDictionaryGetValue.restype = CFTypeRef + + CoreFoundation.CFArrayCreate.argtypes = [ + CFAllocatorRef, + POINTER(CFTypeRef), + CFIndex, + CFArrayCallBacks, + ] + CoreFoundation.CFArrayCreate.restype = CFArrayRef + + CoreFoundation.CFArrayCreateMutable.argtypes = [ + CFAllocatorRef, + CFIndex, + CFArrayCallBacks + ] + CoreFoundation.CFArrayCreateMutable.restype = CFMutableArrayRef + + CoreFoundation.CFArrayAppendValue.argtypes = [ + CFMutableArrayRef, + c_void_p + ] + CoreFoundation.CFArrayAppendValue.restype = None + + 
CoreFoundation.CFArrayGetCount.argtypes = [ + CFArrayRef + ] + CoreFoundation.CFArrayGetCount.restype = CFIndex + + CoreFoundation.CFArrayGetValueAtIndex.argtypes = [ + CFArrayRef, + CFIndex + ] + CoreFoundation.CFArrayGetValueAtIndex.restype = c_void_p + + CoreFoundation.kCFAllocatorDefault = CFAllocatorRef.in_dll( + CoreFoundation, 'kCFAllocatorDefault' + ) + CoreFoundation.kCFTypeArrayCallBacks = c_void_p.in_dll(CoreFoundation, 'kCFTypeArrayCallBacks') + CoreFoundation.kCFTypeDictionaryKeyCallBacks = c_void_p.in_dll( + CoreFoundation, 'kCFTypeDictionaryKeyCallBacks' + ) + CoreFoundation.kCFTypeDictionaryValueCallBacks = c_void_p.in_dll( + CoreFoundation, 'kCFTypeDictionaryValueCallBacks' + ) + + CoreFoundation.CFTypeRef = CFTypeRef + CoreFoundation.CFArrayRef = CFArrayRef + CoreFoundation.CFStringRef = CFStringRef + CoreFoundation.CFDictionaryRef = CFDictionaryRef + +except (AttributeError): + raise ImportError('Error initializing ctypes') + + +class CFConst(object): + """ + A class object that acts as essentially a namespace for CoreFoundation + constants. + """ + kCFStringEncodingUTF8 = CFStringEncoding(0x08000100) + + +class SecurityConst(object): + """ + A class object that acts as essentially a namespace for Security constants. + """ + kSSLSessionOptionBreakOnServerAuth = 0 + + kSSLProtocol2 = 1 + kSSLProtocol3 = 2 + kTLSProtocol1 = 4 + kTLSProtocol11 = 7 + kTLSProtocol12 = 8 + + kSSLClientSide = 1 + kSSLStreamType = 0 + + kSecFormatPEMSequence = 10 + + kSecTrustResultInvalid = 0 + kSecTrustResultProceed = 1 + # This gap is present on purpose: this was kSecTrustResultConfirm, which + # is deprecated. + kSecTrustResultDeny = 3 + kSecTrustResultUnspecified = 4 + kSecTrustResultRecoverableTrustFailure = 5 + kSecTrustResultFatalTrustFailure = 6 + kSecTrustResultOtherError = 7 + + errSSLProtocol = -9800 + errSSLWouldBlock = -9803 + errSSLClosedGraceful = -9805 + errSSLClosedNoNotify = -9816 + errSSLClosedAbort = -9806 + + errSSLXCertChainInvalid = -9807 + errSSLCrypto = -9809 + errSSLInternal = -9810 + errSSLCertExpired = -9814 + errSSLCertNotYetValid = -9815 + errSSLUnknownRootCert = -9812 + errSSLNoRootCert = -9813 + errSSLHostNameMismatch = -9843 + errSSLPeerHandshakeFail = -9824 + errSSLPeerUserCancelled = -9839 + errSSLWeakPeerEphemeralDHKey = -9850 + errSSLServerAuthCompleted = -9841 + errSSLRecordOverflow = -9847 + + errSecVerifyFailed = -67808 + errSecNoTrustSettings = -25263 + errSecItemNotFound = -25300 + errSecInvalidTrustSettings = -25262 + + # Cipher suites. We only pick the ones our default cipher string allows. 
+ TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 = 0xC02C + TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 = 0xC030 + TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 = 0xC02B + TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 = 0xC02F + TLS_DHE_DSS_WITH_AES_256_GCM_SHA384 = 0x00A3 + TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 = 0x009F + TLS_DHE_DSS_WITH_AES_128_GCM_SHA256 = 0x00A2 + TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 = 0x009E + TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 = 0xC024 + TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 = 0xC028 + TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA = 0xC00A + TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA = 0xC014 + TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 = 0x006B + TLS_DHE_DSS_WITH_AES_256_CBC_SHA256 = 0x006A + TLS_DHE_RSA_WITH_AES_256_CBC_SHA = 0x0039 + TLS_DHE_DSS_WITH_AES_256_CBC_SHA = 0x0038 + TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 = 0xC023 + TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 = 0xC027 + TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA = 0xC009 + TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA = 0xC013 + TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 = 0x0067 + TLS_DHE_DSS_WITH_AES_128_CBC_SHA256 = 0x0040 + TLS_DHE_RSA_WITH_AES_128_CBC_SHA = 0x0033 + TLS_DHE_DSS_WITH_AES_128_CBC_SHA = 0x0032 + TLS_RSA_WITH_AES_256_GCM_SHA384 = 0x009D + TLS_RSA_WITH_AES_128_GCM_SHA256 = 0x009C + TLS_RSA_WITH_AES_256_CBC_SHA256 = 0x003D + TLS_RSA_WITH_AES_128_CBC_SHA256 = 0x003C + TLS_RSA_WITH_AES_256_CBC_SHA = 0x0035 + TLS_RSA_WITH_AES_128_CBC_SHA = 0x002F diff -Nru python-urllib3-1.19.1/urllib3/contrib/_securetransport/low_level.py python-urllib3-1.21.1/urllib3/contrib/_securetransport/low_level.py --- python-urllib3-1.19.1/urllib3/contrib/_securetransport/low_level.py 1970-01-01 00:00:00.000000000 +0000 +++ python-urllib3-1.21.1/urllib3/contrib/_securetransport/low_level.py 2017-04-25 11:28:36.000000000 +0000 @@ -0,0 +1,343 @@ +""" +Low-level helpers for the SecureTransport bindings. + +These are Python functions that are not directly related to the high-level APIs +but are necessary to get them to work. They include a whole bunch of low-level +CoreFoundation messing about and memory management. The concerns in this module +are almost entirely about trying to avoid memory leaks and providing +appropriate and useful assistance to the higher-level code. +""" +import base64 +import ctypes +import itertools +import re +import os +import ssl +import tempfile + +from .bindings import Security, CoreFoundation, CFConst + + +# This regular expression is used to grab PEM data out of a PEM bundle. +_PEM_CERTS_RE = re.compile( + b"-----BEGIN CERTIFICATE-----\n(.*?)\n-----END CERTIFICATE-----", re.DOTALL +) + + +def _cf_data_from_bytes(bytestring): + """ + Given a bytestring, create a CFData object from it. This CFData object must + be CFReleased by the caller. + """ + return CoreFoundation.CFDataCreate( + CoreFoundation.kCFAllocatorDefault, bytestring, len(bytestring) + ) + + +def _cf_dictionary_from_tuples(tuples): + """ + Given a list of Python tuples, create an associated CFDictionary. + """ + dictionary_size = len(tuples) + + # We need to get the dictionary keys and values out in the same order. 
keys = (t[0] for t in tuples) + values = (t[1] for t in tuples) + cf_keys = (CoreFoundation.CFTypeRef * dictionary_size)(*keys) + cf_values = (CoreFoundation.CFTypeRef * dictionary_size)(*values) + + return CoreFoundation.CFDictionaryCreate( + CoreFoundation.kCFAllocatorDefault, + cf_keys, + cf_values, + dictionary_size, + CoreFoundation.kCFTypeDictionaryKeyCallBacks, + CoreFoundation.kCFTypeDictionaryValueCallBacks, + ) + + +def _cf_string_to_unicode(value): + """ + Creates a Unicode string from a CFString object. Used entirely for error + reporting. + + Yes, it annoys me quite a lot that this function is this complex. + """ + value_as_void_p = ctypes.cast(value, ctypes.POINTER(ctypes.c_void_p)) + + string = CoreFoundation.CFStringGetCStringPtr( + value_as_void_p, + CFConst.kCFStringEncodingUTF8 + ) + if string is None: + buffer = ctypes.create_string_buffer(1024) + result = CoreFoundation.CFStringGetCString( + value_as_void_p, + buffer, + 1024, + CFConst.kCFStringEncodingUTF8 + ) + if not result: + raise OSError('Error copying C string from CFStringRef') + string = buffer.value + if string is not None: + string = string.decode('utf-8') + return string + + +def _assert_no_error(error, exception_class=None): + """ + Checks the return code and throws an exception if there is an error to + report + """ + if error == 0: + return + + cf_error_string = Security.SecCopyErrorMessageString(error, None) + output = _cf_string_to_unicode(cf_error_string) + CoreFoundation.CFRelease(cf_error_string) + + if output is None or output == u'': + output = u'OSStatus %s' % error + + if exception_class is None: + exception_class = ssl.SSLError + + raise exception_class(output) + + +def _cert_array_from_pem(pem_bundle): + """ + Given a bundle of certs in PEM format, turns them into a CFArray of certs + that can be used to validate a cert chain. + """ + der_certs = [ + base64.b64decode(match.group(1)) + for match in _PEM_CERTS_RE.finditer(pem_bundle) + ] + if not der_certs: + raise ssl.SSLError("No root certificates specified") + + cert_array = CoreFoundation.CFArrayCreateMutable( + CoreFoundation.kCFAllocatorDefault, + 0, + ctypes.byref(CoreFoundation.kCFTypeArrayCallBacks) + ) + if not cert_array: + raise ssl.SSLError("Unable to allocate memory!") + + try: + for der_bytes in der_certs: + certdata = _cf_data_from_bytes(der_bytes) + if not certdata: + raise ssl.SSLError("Unable to allocate memory!") + cert = Security.SecCertificateCreateWithData( + CoreFoundation.kCFAllocatorDefault, certdata + ) + CoreFoundation.CFRelease(certdata) + if not cert: + raise ssl.SSLError("Unable to build cert object!") + + CoreFoundation.CFArrayAppendValue(cert_array, cert) + CoreFoundation.CFRelease(cert) + except Exception: + # We need to free the array before the exception bubbles further. + # We only want to do that if an error occurs: otherwise, the caller + # should free. + CoreFoundation.CFRelease(cert_array) + raise + + return cert_array + + +def _is_cert(item): + """ + Returns True if a given CFTypeRef is a certificate. + """ + expected = Security.SecCertificateGetTypeID() + return CoreFoundation.CFGetTypeID(item) == expected + + +def _is_identity(item): + """ + Returns True if a given CFTypeRef is an identity. + """ + expected = Security.SecIdentityGetTypeID() + return CoreFoundation.CFGetTypeID(item) == expected + + +def _temporary_keychain(): + """ + This function creates a temporary Mac keychain that we can use to work with + credentials. This keychain uses a one-time password and a temporary file to + store the data.
We expect to have one keychain per socket. The returned + SecKeychainRef must be freed by the caller, including calling + SecKeychainDelete. + + Returns a tuple of the SecKeychainRef and the path to the temporary + directory that contains it. + """ + # Unfortunately, SecKeychainCreate requires a path to a keychain. This + # means we cannot use mkstemp to use a generic temporary file. Instead, + # we're going to create a temporary directory and a filename to use there. + # This filename will be 8 random bytes expanded into base64. We also need + # some random bytes to password-protect the keychain we're creating, so we + # ask for 40 random bytes. + random_bytes = os.urandom(40) + filename = base64.b64encode(random_bytes[:8]).decode('utf-8') + password = base64.b64encode(random_bytes[8:]) # Must be valid UTF-8 + tempdirectory = tempfile.mkdtemp() + + keychain_path = os.path.join(tempdirectory, filename).encode('utf-8') + + # We now want to create the keychain itself. + keychain = Security.SecKeychainRef() + status = Security.SecKeychainCreate( + keychain_path, + len(password), + password, + False, + None, + ctypes.byref(keychain) + ) + _assert_no_error(status) + + # Having created the keychain, we want to pass it off to the caller. + return keychain, tempdirectory + + +def _load_items_from_file(keychain, path): + """ + Given a single file, loads all the trust objects from it into arrays and + the keychain. + Returns a tuple of lists: the first list is a list of identities, the + second a list of certs. + """ + certificates = [] + identities = [] + result_array = None + + with open(path, 'rb') as f: + raw_filedata = f.read() + + try: + filedata = CoreFoundation.CFDataCreate( + CoreFoundation.kCFAllocatorDefault, + raw_filedata, + len(raw_filedata) + ) + result_array = CoreFoundation.CFArrayRef() + result = Security.SecItemImport( + filedata, # cert data + None, # Filename, leaving it out for now + None, # What the type of the file is, we don't care + None, # what's in the file, we don't care + 0, # import flags + None, # key params, can include passphrase in the future + keychain, # The keychain to insert into + ctypes.byref(result_array) # Results + ) + _assert_no_error(result) + + # A CFArray is not very useful to us as an intermediary + # representation, so we are going to extract the objects we want + # and then free the array. We don't need to keep hold of keys: the + # keychain already has them! + result_count = CoreFoundation.CFArrayGetCount(result_array) + for index in range(result_count): + item = CoreFoundation.CFArrayGetValueAtIndex( + result_array, index + ) + item = ctypes.cast(item, CoreFoundation.CFTypeRef) + + if _is_cert(item): + CoreFoundation.CFRetain(item) + certificates.append(item) + elif _is_identity(item): + CoreFoundation.CFRetain(item) + identities.append(item) + finally: + if result_array: + CoreFoundation.CFRelease(result_array) + + CoreFoundation.CFRelease(filedata) + + return (identities, certificates) + + +def _load_client_cert_chain(keychain, *paths): + """ + Load certificates and maybe keys from a number of files. Has the end goal + of returning a CFArray containing one SecIdentityRef, and then zero or more + SecCertificateRef objects, suitable for use as a client certificate trust + chain. + """ + # Ok, the strategy. + # + # This relies on knowing that macOS will not give you a SecIdentityRef + # unless you have imported a key into a keychain. 
This is a somewhat
+    # artificial limitation of macOS (for example, it doesn't necessarily
+    # affect iOS), but there is nothing inside Security.framework that lets you
+    # get a SecIdentityRef without having a key in a keychain.
+    #
+    # So the policy here is we take all the files and iterate them in order.
+    # Each one will use SecItemImport to have one or more objects loaded from
+    # it. We will also point at a keychain that macOS can use to work with the
+    # private key.
+    #
+    # Once we have all the objects, we'll check what we actually have. If we
+    # already have a SecIdentityRef in hand, fab: we'll use that. Otherwise,
+    # we'll take the first certificate (which we assume to be our leaf) and
+    # ask the keychain to give us a SecIdentityRef with that cert's associated
+    # key.
+    #
+    # We'll then return a CFArray containing the trust chain: one
+    # SecIdentityRef and then zero-or-more SecCertificateRef objects. The
+    # responsibility for freeing this CFArray will be with the caller. This
+    # CFArray must remain alive for the entire connection, so in practice it
+    # will be stored with a single SSLSocket, along with the reference to the
+    # keychain.
+    certificates = []
+    identities = []
+
+    # Filter out bad paths.
+    paths = (path for path in paths if path)
+
+    try:
+        for file_path in paths:
+            new_identities, new_certs = _load_items_from_file(
+                keychain, file_path
+            )
+            identities.extend(new_identities)
+            certificates.extend(new_certs)
+
+        # Ok, we have everything. The question is: do we have an identity? If
+        # not, we want to grab one from the first cert we have.
+        if not identities:
+            new_identity = Security.SecIdentityRef()
+            status = Security.SecIdentityCreateWithCertificate(
+                keychain,
+                certificates[0],
+                ctypes.byref(new_identity)
+            )
+            _assert_no_error(status)
+            identities.append(new_identity)
+
+            # We now want to release the original certificate, as we no longer
+            # need it.
+            CoreFoundation.CFRelease(certificates.pop(0))
+
+        # We now need to build a new CFArray that holds the trust chain.
+        trust_chain = CoreFoundation.CFArrayCreateMutable(
+            CoreFoundation.kCFAllocatorDefault,
+            0,
+            ctypes.byref(CoreFoundation.kCFTypeArrayCallBacks),
+        )
+        for item in itertools.chain(identities, certificates):
+            # ArrayAppendValue does a CFRetain on the item. That's fine,
+            # because the finally block will release our other refs to them.
+            CoreFoundation.CFArrayAppendValue(trust_chain, item)
+
+        return trust_chain
+    finally:
+        for obj in itertools.chain(identities, certificates):
+            CoreFoundation.CFRelease(obj)
diff -Nru python-urllib3-1.19.1/urllib3/contrib/securetransport.py python-urllib3-1.21.1/urllib3/contrib/securetransport.py
--- python-urllib3-1.19.1/urllib3/contrib/securetransport.py	1970-01-01 00:00:00.000000000 +0000
+++ python-urllib3-1.21.1/urllib3/contrib/securetransport.py	2017-05-02 10:56:31.000000000 +0000
@@ -0,0 +1,807 @@
+"""
+SecureTransport support for urllib3 via ctypes.
+
+This makes platform-native TLS available to urllib3 users on macOS without the
+use of a compiler. This is an important feature because the Python Package
+Index is moving to become a TLSv1.2-or-higher server, and the default OpenSSL
+that ships with macOS is not capable of doing TLSv1.2. The only way to resolve
+this is to give macOS users an alternative solution to the problem, and that
+solution is to use SecureTransport.
+
+We use ctypes here because this solution must not require a compiler. That's
+because pip is not allowed to require a compiler either.
+
+This is not intended to be a seriously long-term solution to this problem.
+The hope is that PEP 543 will eventually solve this issue for us, at which
+point we can retire this contrib module. But in the short term, we need to
+solve the impending tire fire that is Python on Mac without this kind of
+contrib module. So...here we are.
+
+To use this module, simply import and inject it::
+
+    import urllib3.contrib.securetransport
+    urllib3.contrib.securetransport.inject_into_urllib3()
+
+Happy TLSing!
+"""
+from __future__ import absolute_import
+
+import contextlib
+import ctypes
+import errno
+import os.path
+import shutil
+import socket
+import ssl
+import threading
+import weakref
+
+from .. import util
+from ._securetransport.bindings import (
+    Security, SecurityConst, CoreFoundation
+)
+from ._securetransport.low_level import (
+    _assert_no_error, _cert_array_from_pem, _temporary_keychain,
+    _load_client_cert_chain
+)
+
+try:  # Platform-specific: Python 2
+    from socket import _fileobject
+except ImportError:  # Platform-specific: Python 3
+    _fileobject = None
+    from ..packages.backports.makefile import backport_makefile
+
+try:
+    memoryview(b'')
+except NameError:
+    raise ImportError("SecureTransport only works on Pythons with memoryview")
+
+__all__ = ['inject_into_urllib3', 'extract_from_urllib3']
+
+# SNI always works
+HAS_SNI = True
+
+orig_util_HAS_SNI = util.HAS_SNI
+orig_util_SSLContext = util.ssl_.SSLContext
+
+# This dictionary is used by the read callback to obtain a handle to the
+# calling wrapped socket. This is a pretty silly approach, but for now it'll
+# do. I feel like I should be able to smuggle a handle to the wrapped socket
+# directly in the SSLConnectionRef, but for now this approach will work I
+# guess.
+#
+# We need to lock around this structure for inserts, but we don't do it for
+# reads/writes in the callbacks. The reasoning here goes as follows:
+#
+#    1. It is not possible to call into the callbacks before the dictionary is
+#       populated, so once in the callback the id must be in the dictionary.
+#    2. The callbacks don't mutate the dictionary, they only read from it, and
+#       so cannot conflict with any of the insertions.
+#
+# This is good: if we had to lock in the callbacks we'd drastically slow down
+# the performance of this code.
+_connection_refs = weakref.WeakValueDictionary()
+_connection_ref_lock = threading.Lock()
+
+# Limit writes to 16kB. This is OpenSSL's limit, but we'll cargo-cult it over
+# for no better reason than we need *a* limit, and this one is right there.
+SSL_WRITE_BLOCKSIZE = 16384
+
+# This is our equivalent of util.ssl_.DEFAULT_CIPHERS, but expanded out to
+# individual cipher suites. We need to do this because this is how
+# SecureTransport wants them.
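+#
+# (Broadly, the ordering below mirrors DEFAULT_CIPHERS: forward-secret key
+# exchanges first (ECDHE, then DHE), AES-GCM preferred over AES-CBC within
+# each family, and plain RSA key exchange last.)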
+CIPHER_SUITES = [ + SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, + SecurityConst.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, + SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, + SecurityConst.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, + SecurityConst.TLS_DHE_DSS_WITH_AES_256_GCM_SHA384, + SecurityConst.TLS_DHE_RSA_WITH_AES_256_GCM_SHA384, + SecurityConst.TLS_DHE_DSS_WITH_AES_128_GCM_SHA256, + SecurityConst.TLS_DHE_RSA_WITH_AES_128_GCM_SHA256, + SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, + SecurityConst.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, + SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, + SecurityConst.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, + SecurityConst.TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, + SecurityConst.TLS_DHE_DSS_WITH_AES_256_CBC_SHA256, + SecurityConst.TLS_DHE_RSA_WITH_AES_256_CBC_SHA, + SecurityConst.TLS_DHE_DSS_WITH_AES_256_CBC_SHA, + SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, + SecurityConst.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, + SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, + SecurityConst.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, + SecurityConst.TLS_DHE_RSA_WITH_AES_128_CBC_SHA256, + SecurityConst.TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, + SecurityConst.TLS_DHE_RSA_WITH_AES_128_CBC_SHA, + SecurityConst.TLS_DHE_DSS_WITH_AES_128_CBC_SHA, + SecurityConst.TLS_RSA_WITH_AES_256_GCM_SHA384, + SecurityConst.TLS_RSA_WITH_AES_128_GCM_SHA256, + SecurityConst.TLS_RSA_WITH_AES_256_CBC_SHA256, + SecurityConst.TLS_RSA_WITH_AES_128_CBC_SHA256, + SecurityConst.TLS_RSA_WITH_AES_256_CBC_SHA, + SecurityConst.TLS_RSA_WITH_AES_128_CBC_SHA, +] + +# Basically this is simple: for PROTOCOL_SSLv23 we turn it into a low of +# TLSv1 and a high of TLSv1.2. For everything else, we pin to that version. +_protocol_to_min_max = { + ssl.PROTOCOL_SSLv23: (SecurityConst.kTLSProtocol1, SecurityConst.kTLSProtocol12), +} + +if hasattr(ssl, "PROTOCOL_SSLv2"): + _protocol_to_min_max[ssl.PROTOCOL_SSLv2] = ( + SecurityConst.kSSLProtocol2, SecurityConst.kSSLProtocol2 + ) +if hasattr(ssl, "PROTOCOL_SSLv3"): + _protocol_to_min_max[ssl.PROTOCOL_SSLv3] = ( + SecurityConst.kSSLProtocol3, SecurityConst.kSSLProtocol3 + ) +if hasattr(ssl, "PROTOCOL_TLSv1"): + _protocol_to_min_max[ssl.PROTOCOL_TLSv1] = ( + SecurityConst.kTLSProtocol1, SecurityConst.kTLSProtocol1 + ) +if hasattr(ssl, "PROTOCOL_TLSv1_1"): + _protocol_to_min_max[ssl.PROTOCOL_TLSv1_1] = ( + SecurityConst.kTLSProtocol11, SecurityConst.kTLSProtocol11 + ) +if hasattr(ssl, "PROTOCOL_TLSv1_2"): + _protocol_to_min_max[ssl.PROTOCOL_TLSv1_2] = ( + SecurityConst.kTLSProtocol12, SecurityConst.kTLSProtocol12 + ) +if hasattr(ssl, "PROTOCOL_TLS"): + _protocol_to_min_max[ssl.PROTOCOL_TLS] = _protocol_to_min_max[ssl.PROTOCOL_SSLv23] + + +def inject_into_urllib3(): + """ + Monkey-patch urllib3 with SecureTransport-backed SSL-support. + """ + util.ssl_.SSLContext = SecureTransportContext + util.HAS_SNI = HAS_SNI + util.ssl_.HAS_SNI = HAS_SNI + util.IS_SECURETRANSPORT = True + util.ssl_.IS_SECURETRANSPORT = True + + +def extract_from_urllib3(): + """ + Undo monkey-patching by :func:`inject_into_urllib3`. + """ + util.ssl_.SSLContext = orig_util_SSLContext + util.HAS_SNI = orig_util_HAS_SNI + util.ssl_.HAS_SNI = orig_util_HAS_SNI + util.IS_SECURETRANSPORT = False + util.ssl_.IS_SECURETRANSPORT = False + + +def _read_callback(connection_id, data_buffer, data_length_pointer): + """ + SecureTransport read callback. This is called by ST to request that data + be returned from the socket. 
+ """ + wrapped_socket = None + try: + wrapped_socket = _connection_refs.get(connection_id) + if wrapped_socket is None: + return SecurityConst.errSSLInternal + base_socket = wrapped_socket.socket + + requested_length = data_length_pointer[0] + + timeout = wrapped_socket.gettimeout() + error = None + read_count = 0 + buffer = (ctypes.c_char * requested_length).from_address(data_buffer) + buffer_view = memoryview(buffer) + + try: + while read_count < requested_length: + if timeout is None or timeout >= 0: + readables = util.wait_for_read([base_socket], timeout) + if not readables: + raise socket.error(errno.EAGAIN, 'timed out') + + # We need to tell ctypes that we have a buffer that can be + # written to. Upsettingly, we do that like this: + chunk_size = base_socket.recv_into( + buffer_view[read_count:requested_length] + ) + read_count += chunk_size + if not chunk_size: + if not read_count: + return SecurityConst.errSSLClosedGraceful + break + except (socket.error) as e: + error = e.errno + + if error is not None and error != errno.EAGAIN: + if error == errno.ECONNRESET: + return SecurityConst.errSSLClosedAbort + raise + + data_length_pointer[0] = read_count + + if read_count != requested_length: + return SecurityConst.errSSLWouldBlock + + return 0 + except Exception as e: + if wrapped_socket is not None: + wrapped_socket._exception = e + return SecurityConst.errSSLInternal + + +def _write_callback(connection_id, data_buffer, data_length_pointer): + """ + SecureTransport write callback. This is called by ST to request that data + actually be sent on the network. + """ + wrapped_socket = None + try: + wrapped_socket = _connection_refs.get(connection_id) + if wrapped_socket is None: + return SecurityConst.errSSLInternal + base_socket = wrapped_socket.socket + + bytes_to_write = data_length_pointer[0] + data = ctypes.string_at(data_buffer, bytes_to_write) + + timeout = wrapped_socket.gettimeout() + error = None + sent = 0 + + try: + while sent < bytes_to_write: + if timeout is None or timeout >= 0: + writables = util.wait_for_write([base_socket], timeout) + if not writables: + raise socket.error(errno.EAGAIN, 'timed out') + chunk_sent = base_socket.send(data) + sent += chunk_sent + + # This has some needless copying here, but I'm not sure there's + # much value in optimising this data path. + data = data[chunk_sent:] + except (socket.error) as e: + error = e.errno + + if error is not None and error != errno.EAGAIN: + if error == errno.ECONNRESET: + return SecurityConst.errSSLClosedAbort + raise + + data_length_pointer[0] = sent + if sent != bytes_to_write: + return SecurityConst.errSSLWouldBlock + + return 0 + except Exception as e: + if wrapped_socket is not None: + wrapped_socket._exception = e + return SecurityConst.errSSLInternal + + +# We need to keep these two objects references alive: if they get GC'd while +# in use then SecureTransport could attempt to call a function that is in freed +# memory. That would be...uh...bad. Yeah, that's the word. Bad. +_read_callback_pointer = Security.SSLReadFunc(_read_callback) +_write_callback_pointer = Security.SSLWriteFunc(_write_callback) + + +class WrappedSocket(object): + """ + API-compatibility wrapper for Python's OpenSSL wrapped socket object. + + Note: _makefile_refs, _drop(), and _reuse() are needed for the garbage + collector of PyPy. 
+ """ + def __init__(self, socket): + self.socket = socket + self.context = None + self._makefile_refs = 0 + self._closed = False + self._exception = None + self._keychain = None + self._keychain_dir = None + self._client_cert_chain = None + + # We save off the previously-configured timeout and then set it to + # zero. This is done because we use select and friends to handle the + # timeouts, but if we leave the timeout set on the lower socket then + # Python will "kindly" call select on that socket again for us. Avoid + # that by forcing the timeout to zero. + self._timeout = self.socket.gettimeout() + self.socket.settimeout(0) + + @contextlib.contextmanager + def _raise_on_error(self): + """ + A context manager that can be used to wrap calls that do I/O from + SecureTransport. If any of the I/O callbacks hit an exception, this + context manager will correctly propagate the exception after the fact. + This avoids silently swallowing those exceptions. + + It also correctly forces the socket closed. + """ + self._exception = None + + # We explicitly don't catch around this yield because in the unlikely + # event that an exception was hit in the block we don't want to swallow + # it. + yield + if self._exception is not None: + exception, self._exception = self._exception, None + self.close() + raise exception + + def _set_ciphers(self): + """ + Sets up the allowed ciphers. By default this matches the set in + util.ssl_.DEFAULT_CIPHERS, at least as supported by macOS. This is done + custom and doesn't allow changing at this time, mostly because parsing + OpenSSL cipher strings is going to be a freaking nightmare. + """ + ciphers = (Security.SSLCipherSuite * len(CIPHER_SUITES))(*CIPHER_SUITES) + result = Security.SSLSetEnabledCiphers( + self.context, ciphers, len(CIPHER_SUITES) + ) + _assert_no_error(result) + + def _custom_validate(self, verify, trust_bundle): + """ + Called when we have set custom validation. We do this in two cases: + first, when cert validation is entirely disabled; and second, when + using a custom trust DB. + """ + # If we disabled cert validation, just say: cool. + if not verify: + return + + # We want data in memory, so load it up. + if os.path.isfile(trust_bundle): + with open(trust_bundle, 'rb') as f: + trust_bundle = f.read() + + cert_array = None + trust = Security.SecTrustRef() + + try: + # Get a CFArray that contains the certs we want. + cert_array = _cert_array_from_pem(trust_bundle) + + # Ok, now the hard part. We want to get the SecTrustRef that ST has + # created for this connection, shove our CAs into it, tell ST to + # ignore everything else it knows, and then ask if it can build a + # chain. This is a buuuunch of code. + result = Security.SSLCopyPeerTrust( + self.context, ctypes.byref(trust) + ) + _assert_no_error(result) + if not trust: + raise ssl.SSLError("Failed to copy trust reference") + + result = Security.SecTrustSetAnchorCertificates(trust, cert_array) + _assert_no_error(result) + + result = Security.SecTrustSetAnchorCertificatesOnly(trust, True) + _assert_no_error(result) + + trust_result = Security.SecTrustResultType() + result = Security.SecTrustEvaluate( + trust, ctypes.byref(trust_result) + ) + _assert_no_error(result) + finally: + if trust: + CoreFoundation.CFRelease(trust) + + if cert_array is None: + CoreFoundation.CFRelease(cert_array) + + # Ok, now we can look at what the result was. 
+ successes = ( + SecurityConst.kSecTrustResultUnspecified, + SecurityConst.kSecTrustResultProceed + ) + if trust_result.value not in successes: + raise ssl.SSLError( + "certificate verify failed, error code: %d" % + trust_result.value + ) + + def handshake(self, + server_hostname, + verify, + trust_bundle, + min_version, + max_version, + client_cert, + client_key, + client_key_passphrase): + """ + Actually performs the TLS handshake. This is run automatically by + wrapped socket, and shouldn't be needed in user code. + """ + # First, we do the initial bits of connection setup. We need to create + # a context, set its I/O funcs, and set the connection reference. + self.context = Security.SSLCreateContext( + None, SecurityConst.kSSLClientSide, SecurityConst.kSSLStreamType + ) + result = Security.SSLSetIOFuncs( + self.context, _read_callback_pointer, _write_callback_pointer + ) + _assert_no_error(result) + + # Here we need to compute the handle to use. We do this by taking the + # id of self modulo 2**31 - 1. If this is already in the dictionary, we + # just keep incrementing by one until we find a free space. + with _connection_ref_lock: + handle = id(self) % 2147483647 + while handle in _connection_refs: + handle = (handle + 1) % 2147483647 + _connection_refs[handle] = self + + result = Security.SSLSetConnection(self.context, handle) + _assert_no_error(result) + + # If we have a server hostname, we should set that too. + if server_hostname: + if not isinstance(server_hostname, bytes): + server_hostname = server_hostname.encode('utf-8') + + result = Security.SSLSetPeerDomainName( + self.context, server_hostname, len(server_hostname) + ) + _assert_no_error(result) + + # Setup the ciphers. + self._set_ciphers() + + # Set the minimum and maximum TLS versions. + result = Security.SSLSetProtocolVersionMin(self.context, min_version) + _assert_no_error(result) + result = Security.SSLSetProtocolVersionMax(self.context, max_version) + _assert_no_error(result) + + # If there's a trust DB, we need to use it. We do that by telling + # SecureTransport to break on server auth. We also do that if we don't + # want to validate the certs at all: we just won't actually do any + # authing in that case. + if not verify or trust_bundle is not None: + result = Security.SSLSetSessionOption( + self.context, + SecurityConst.kSSLSessionOptionBreakOnServerAuth, + True + ) + _assert_no_error(result) + + # If there's a client cert, we need to use it. + if client_cert: + self._keychain, self._keychain_dir = _temporary_keychain() + self._client_cert_chain = _load_client_cert_chain( + self._keychain, client_cert, client_key + ) + result = Security.SSLSetCertificate( + self.context, self._client_cert_chain + ) + _assert_no_error(result) + + while True: + with self._raise_on_error(): + result = Security.SSLHandshake(self.context) + + if result == SecurityConst.errSSLWouldBlock: + raise socket.timeout("handshake timed out") + elif result == SecurityConst.errSSLServerAuthCompleted: + self._custom_validate(verify, trust_bundle) + continue + else: + _assert_no_error(result) + break + + def fileno(self): + return self.socket.fileno() + + # Copy-pasted from Python 3.5 source code + def _decref_socketios(self): + if self._makefile_refs > 0: + self._makefile_refs -= 1 + if self._closed: + self.close() + + def recv(self, bufsiz): + buffer = ctypes.create_string_buffer(bufsiz) + bytes_read = self.recv_into(buffer, bufsiz) + data = buffer[:bytes_read] + return data + + def recv_into(self, buffer, nbytes=None): + # Read short on EOF. 
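+        # (Returning 0 from recv/recv_into is the standard Python socket
+        # convention for EOF, so a closed connection reports "no more data"
+        # rather than raising.)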
+ if self._closed: + return 0 + + if nbytes is None: + nbytes = len(buffer) + + buffer = (ctypes.c_char * nbytes).from_buffer(buffer) + processed_bytes = ctypes.c_size_t(0) + + with self._raise_on_error(): + result = Security.SSLRead( + self.context, buffer, nbytes, ctypes.byref(processed_bytes) + ) + + # There are some result codes that we want to treat as "not always + # errors". Specifically, those are errSSLWouldBlock, + # errSSLClosedGraceful, and errSSLClosedNoNotify. + if (result == SecurityConst.errSSLWouldBlock): + # If we didn't process any bytes, then this was just a time out. + # However, we can get errSSLWouldBlock in situations when we *did* + # read some data, and in those cases we should just read "short" + # and return. + if processed_bytes.value == 0: + # Timed out, no data read. + raise socket.timeout("recv timed out") + elif result in (SecurityConst.errSSLClosedGraceful, SecurityConst.errSSLClosedNoNotify): + # The remote peer has closed this connection. We should do so as + # well. Note that we don't actually return here because in + # principle this could actually be fired along with return data. + # It's unlikely though. + self.close() + else: + _assert_no_error(result) + + # Ok, we read and probably succeeded. We should return whatever data + # was actually read. + return processed_bytes.value + + def settimeout(self, timeout): + self._timeout = timeout + + def gettimeout(self): + return self._timeout + + def send(self, data): + processed_bytes = ctypes.c_size_t(0) + + with self._raise_on_error(): + result = Security.SSLWrite( + self.context, data, len(data), ctypes.byref(processed_bytes) + ) + + if result == SecurityConst.errSSLWouldBlock and processed_bytes.value == 0: + # Timed out + raise socket.timeout("send timed out") + else: + _assert_no_error(result) + + # We sent, and probably succeeded. Tell them how much we sent. + return processed_bytes.value + + def sendall(self, data): + total_sent = 0 + while total_sent < len(data): + sent = self.send(data[total_sent:total_sent + SSL_WRITE_BLOCKSIZE]) + total_sent += sent + + def shutdown(self): + with self._raise_on_error(): + Security.SSLClose(self.context) + + def close(self): + # TODO: should I do clean shutdown here? Do I have to? + if self._makefile_refs < 1: + self._closed = True + if self.context: + CoreFoundation.CFRelease(self.context) + self.context = None + if self._client_cert_chain: + CoreFoundation.CFRelease(self._client_cert_chain) + self._client_cert_chain = None + if self._keychain: + Security.SecKeychainDelete(self._keychain) + CoreFoundation.CFRelease(self._keychain) + shutil.rmtree(self._keychain_dir) + self._keychain = self._keychain_dir = None + return self.socket.close() + else: + self._makefile_refs -= 1 + + def getpeercert(self, binary_form=False): + # Urgh, annoying. + # + # Here's how we do this: + # + # 1. Call SSLCopyPeerTrust to get hold of the trust object for this + # connection. + # 2. Call SecTrustGetCertificateAtIndex for index 0 to get the leaf. + # 3. To get the CN, call SecCertificateCopyCommonName and process that + # string so that it's of the appropriate type. + # 4. To get the SAN, we need to do something a bit more complex: + # a. Call SecCertificateCopyValues to get the data, requesting + # kSecOIDSubjectAltName. + # b. Mess about with this dictionary to try to get the SANs out. + # + # This is gross. Really gross. It's going to be a few hundred LoC extra + # just to repeat something that SecureTransport can *already do*. 
So my + # operating assumption at this time is that what we want to do is + # instead to just flag to urllib3 that it shouldn't do its own hostname + # validation when using SecureTransport. + if not binary_form: + raise ValueError( + "SecureTransport only supports dumping binary certs" + ) + trust = Security.SecTrustRef() + certdata = None + der_bytes = None + + try: + # Grab the trust store. + result = Security.SSLCopyPeerTrust( + self.context, ctypes.byref(trust) + ) + _assert_no_error(result) + if not trust: + # Probably we haven't done the handshake yet. No biggie. + return None + + cert_count = Security.SecTrustGetCertificateCount(trust) + if not cert_count: + # Also a case that might happen if we haven't handshaked. + # Handshook? Handshaken? + return None + + leaf = Security.SecTrustGetCertificateAtIndex(trust, 0) + assert leaf + + # Ok, now we want the DER bytes. + certdata = Security.SecCertificateCopyData(leaf) + assert certdata + + data_length = CoreFoundation.CFDataGetLength(certdata) + data_buffer = CoreFoundation.CFDataGetBytePtr(certdata) + der_bytes = ctypes.string_at(data_buffer, data_length) + finally: + if certdata: + CoreFoundation.CFRelease(certdata) + if trust: + CoreFoundation.CFRelease(trust) + + return der_bytes + + def _reuse(self): + self._makefile_refs += 1 + + def _drop(self): + if self._makefile_refs < 1: + self.close() + else: + self._makefile_refs -= 1 + + +if _fileobject: # Platform-specific: Python 2 + def makefile(self, mode, bufsize=-1): + self._makefile_refs += 1 + return _fileobject(self, mode, bufsize, close=True) +else: # Platform-specific: Python 3 + def makefile(self, mode="r", buffering=None, *args, **kwargs): + # We disable buffering with SecureTransport because it conflicts with + # the buffering that ST does internally (see issue #1153 for more). + buffering = 0 + return backport_makefile(self, mode, buffering, *args, **kwargs) + +WrappedSocket.makefile = makefile + + +class SecureTransportContext(object): + """ + I am a wrapper class for the SecureTransport library, to translate the + interface of the standard library ``SSLContext`` object to calls into + SecureTransport. + """ + def __init__(self, protocol): + self._min_version, self._max_version = _protocol_to_min_max[protocol] + self._options = 0 + self._verify = False + self._trust_bundle = None + self._client_cert = None + self._client_key = None + self._client_key_passphrase = None + + @property + def check_hostname(self): + """ + SecureTransport cannot have its hostname checking disabled. For more, + see the comment on getpeercert() in this file. + """ + return True + + @check_hostname.setter + def check_hostname(self, value): + """ + SecureTransport cannot have its hostname checking disabled. For more, + see the comment on getpeercert() in this file. + """ + pass + + @property + def options(self): + # TODO: Well, crap. + # + # So this is the bit of the code that is the most likely to cause us + # trouble. Essentially we need to enumerate all of the SSL options that + # users might want to use and try to see if we can sensibly translate + # them, or whether we should just ignore them. + return self._options + + @options.setter + def options(self, value): + # TODO: Update in line with above. 
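+        # (For now the value is stored but never translated into any
+        # SecureTransport configuration, so flags like ssl.OP_NO_SSLv3 are
+        # remembered here without taking effect.)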
+        self._options = value
+
+    @property
+    def verify_mode(self):
+        return ssl.CERT_REQUIRED if self._verify else ssl.CERT_NONE
+
+    @verify_mode.setter
+    def verify_mode(self, value):
+        self._verify = True if value == ssl.CERT_REQUIRED else False
+
+    def set_default_verify_paths(self):
+        # So, this has to do something a bit weird. Specifically, what it does
+        # is nothing.
+        #
+        # This means that, if we had previously had load_verify_locations
+        # called, this does not undo that. We need to do that because it turns
+        # out that the rest of the urllib3 code will attempt to load the
+        # default verify paths if it hasn't been told about any paths, even if
+        # the context itself was configured sometime earlier. We resolve that
+        # by just ignoring it.
+        pass
+
+    def load_default_certs(self):
+        return self.set_default_verify_paths()
+
+    def set_ciphers(self, ciphers):
+        # For now, we just require the default cipher string.
+        if ciphers != util.ssl_.DEFAULT_CIPHERS:
+            raise ValueError(
+                "SecureTransport doesn't support custom cipher strings"
+            )
+
+    def load_verify_locations(self, cafile=None, capath=None, cadata=None):
+        # OK, we only really support cadata and cafile.
+        if capath is not None:
+            raise ValueError(
+                "SecureTransport does not support cert directories"
+            )
+
+        self._trust_bundle = cafile or cadata
+
+    def load_cert_chain(self, certfile, keyfile=None, password=None):
+        self._client_cert = certfile
+        self._client_key = keyfile
+        self._client_key_passphrase = password
+
+    def wrap_socket(self, sock, server_side=False,
+                    do_handshake_on_connect=True, suppress_ragged_eofs=True,
+                    server_hostname=None):
+        # So, what do we do here? Firstly, we assert some properties. This is a
+        # stripped down shim, so there is some functionality we don't support.
+        # See PEP 543 for the real deal.
+        assert not server_side
+        assert do_handshake_on_connect
+        assert suppress_ragged_eofs
+
+        # Ok, we're good to go. Now we want to create the wrapped socket object
+        # and store it in the appropriate place.
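+        # (Everything configured on this context, including the verify mode,
+        # trust bundle, protocol bounds and any client certificate, is handed
+        # to the socket's handshake() call below.)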
+ wrapped_socket = WrappedSocket(sock) + + # Now we can handshake + wrapped_socket.handshake( + server_hostname, self._verify, self._trust_bundle, + self._min_version, self._max_version, self._client_cert, + self._client_key, self._client_key_passphrase + ) + return wrapped_socket diff -Nru python-urllib3-1.19.1/urllib3/contrib/socks.py python-urllib3-1.21.1/urllib3/contrib/socks.py --- python-urllib3-1.19.1/urllib3/contrib/socks.py 2016-10-12 16:41:52.000000000 +0000 +++ python-urllib3-1.21.1/urllib3/contrib/socks.py 2017-05-02 09:08:45.000000000 +0000 @@ -83,6 +83,7 @@ proxy_port=self._socks_options['proxy_port'], proxy_username=self._socks_options['username'], proxy_password=self._socks_options['password'], + proxy_rdns=self._socks_options['rdns'], timeout=self.timeout, **extra_kw ) @@ -153,8 +154,16 @@ if parsed.scheme == 'socks5': socks_version = socks.PROXY_TYPE_SOCKS5 + rdns = False + elif parsed.scheme == 'socks5h': + socks_version = socks.PROXY_TYPE_SOCKS5 + rdns = True elif parsed.scheme == 'socks4': socks_version = socks.PROXY_TYPE_SOCKS4 + rdns = False + elif parsed.scheme == 'socks4a': + socks_version = socks.PROXY_TYPE_SOCKS4 + rdns = True else: raise ValueError( "Unable to determine SOCKS version from %s" % proxy_url @@ -168,6 +177,7 @@ 'proxy_port': parsed.port, 'username': username, 'password': password, + 'rdns': rdns } connection_pool_kw['_socks_options'] = socks_options diff -Nru python-urllib3-1.19.1/urllib3/exceptions.py python-urllib3-1.21.1/urllib3/exceptions.py --- python-urllib3-1.19.1/urllib3/exceptions.py 2016-11-03 15:16:16.000000000 +0000 +++ python-urllib3-1.21.1/urllib3/exceptions.py 2017-05-02 09:08:45.000000000 +0000 @@ -239,3 +239,8 @@ def __init__(self, defects, unparsed_data): message = '%s, unparsed data: %r' % (defects or 'Unknown', unparsed_data) super(HeaderParsingError, self).__init__(message) + + +class UnrewindableBodyError(HTTPError): + "urllib3 encountered an error when trying to rewind a body" + pass diff -Nru python-urllib3-1.19.1/urllib3/__init__.py python-urllib3-1.21.1/urllib3/__init__.py --- python-urllib3-1.19.1/urllib3/__init__.py 2016-11-16 10:03:57.000000000 +0000 +++ python-urllib3-1.21.1/urllib3/__init__.py 2017-05-02 10:56:31.000000000 +0000 @@ -32,7 +32,7 @@ __author__ = 'Andrey Petrov (andrey.petrov@shazow.net)' __license__ = 'MIT' -__version__ = '1.19.1' +__version__ = '1.21.1' __all__ = ( 'HTTPConnectionPool', diff -Nru python-urllib3-1.19.1/urllib3/poolmanager.py python-urllib3-1.21.1/urllib3/poolmanager.py --- python-urllib3-1.19.1/urllib3/poolmanager.py 2016-11-03 13:44:18.000000000 +0000 +++ python-urllib3-1.21.1/urllib3/poolmanager.py 2017-05-02 10:56:31.000000000 +0000 @@ -21,28 +21,42 @@ SSL_KEYWORDS = ('key_file', 'cert_file', 'cert_reqs', 'ca_certs', 'ssl_version', 'ca_cert_dir', 'ssl_context') -# The base fields to use when determining what pool to get a connection from; -# these do not rely on the ``connection_pool_kw`` and can be determined by the -# URL and potentially the ``urllib3.connection.port_by_scheme`` dictionary. -# -# All custom key schemes should include the fields in this key at a minimum. -BasePoolKey = collections.namedtuple('BasePoolKey', ('scheme', 'host', 'port')) - -# The fields to use when determining what pool to get a HTTP and HTTPS -# connection from. All additional fields must be present in the PoolManager's -# ``connection_pool_kw`` instance variable. 
-HTTPPoolKey = collections.namedtuple( - 'HTTPPoolKey', BasePoolKey._fields + ('timeout', 'retries', 'strict', - 'block', 'source_address') -) -HTTPSPoolKey = collections.namedtuple( - 'HTTPSPoolKey', HTTPPoolKey._fields + SSL_KEYWORDS +# All known keyword arguments that could be provided to the pool manager, its +# pools, or the underlying connections. This is used to construct a pool key. +_key_fields = ( + 'key_scheme', # str + 'key_host', # str + 'key_port', # int + 'key_timeout', # int or float or Timeout + 'key_retries', # int or Retry + 'key_strict', # bool + 'key_block', # bool + 'key_source_address', # str + 'key_key_file', # str + 'key_cert_file', # str + 'key_cert_reqs', # str + 'key_ca_certs', # str + 'key_ssl_version', # str + 'key_ca_cert_dir', # str + 'key_ssl_context', # instance of ssl.SSLContext or urllib3.util.ssl_.SSLContext + 'key_maxsize', # int + 'key_headers', # dict + 'key__proxy', # parsed proxy url + 'key__proxy_headers', # dict + 'key_socket_options', # list of (level (int), optname (int), value (int or str)) tuples + 'key__socks_options', # dict + 'key_assert_hostname', # bool or string + 'key_assert_fingerprint', # str ) +#: The namedtuple class used to construct keys for the connection pool. +#: All custom key schemes should include the fields in this key at a minimum. +PoolKey = collections.namedtuple('PoolKey', _key_fields) + def _default_key_normalizer(key_class, request_context): """ - Create a pool key of type ``key_class`` for a request. + Create a pool key out of a request context dictionary. According to RFC 3986, both the scheme and host are case-insensitive. Therefore, this function normalizes both before constructing the pool @@ -52,26 +66,50 @@ :param key_class: The class to use when constructing the key. This should be a namedtuple with the ``scheme`` and ``host`` keys at a minimum. - + :type key_class: namedtuple :param request_context: A dictionary-like object that contain the context for a request. - It should contain a key for each field in the :class:`HTTPPoolKey` + :type request_context: dict + + :return: A namedtuple that can be used as a connection pool key. + :rtype: PoolKey """ - context = {} - for key in key_class._fields: - context[key] = request_context.get(key) + # Since we mutate the dictionary, make a copy first + context = request_context.copy() context['scheme'] = context['scheme'].lower() context['host'] = context['host'].lower() + + # These are both dictionaries and need to be transformed into frozensets + for key in ('headers', '_proxy_headers', '_socks_options'): + if key in context and context[key] is not None: + context[key] = frozenset(context[key].items()) + + # The socket_options key may be a list and needs to be transformed into a + # tuple. + socket_opts = context.get('socket_options') + if socket_opts is not None: + context['socket_options'] = tuple(socket_opts) + + # Map the kwargs to the names in the namedtuple - this is necessary since + # namedtuples can't have fields starting with '_'. + for key in list(context.keys()): + context['key_' + key] = context.pop(key) + + # Default to ``None`` for keys missing from the context + for field in key_class._fields: + if field not in context: + context[field] = None + return key_class(**context) -# A dictionary that maps a scheme to a callable that creates a pool key. -# This can be used to alter the way pool keys are constructed, if desired. -# Each PoolManager makes a copy of this dictionary so they can be configured -# globally here, or individually on the instance. 
+#: A dictionary that maps a scheme to a callable that creates a pool key. +#: This can be used to alter the way pool keys are constructed, if desired. +#: Each PoolManager makes a copy of this dictionary so they can be configured +#: globally here, or individually on the instance. key_fn_by_scheme = { - 'http': functools.partial(_default_key_normalizer, HTTPPoolKey), - 'https': functools.partial(_default_key_normalizer, HTTPSPoolKey), + 'http': functools.partial(_default_key_normalizer, PoolKey), + 'https': functools.partial(_default_key_normalizer, PoolKey), } pool_classes_by_scheme = { @@ -93,7 +131,7 @@ Headers to include with all requests, unless other headers are given explicitly. - :param \**connection_pool_kw: + :param \\**connection_pool_kw: Additional parameters are used to create fresh :class:`urllib3.connectionpool.ConnectionPool` instances. @@ -129,22 +167,32 @@ # Return False to re-raise any potential exceptions return False - def _new_pool(self, scheme, host, port): + def _new_pool(self, scheme, host, port, request_context=None): """ - Create a new :class:`ConnectionPool` based on host, port and scheme. + Create a new :class:`ConnectionPool` based on host, port, scheme, and + any additional pool keyword arguments. - This method is used to actually create the connection pools handed out - by :meth:`connection_from_url` and companion methods. It is intended - to be overridden for customization. + If ``request_context`` is provided, it is provided as keyword arguments + to the pool class used. This method is used to actually create the + connection pools handed out by :meth:`connection_from_url` and + companion methods. It is intended to be overridden for customization. """ pool_cls = self.pool_classes_by_scheme[scheme] - kwargs = self.connection_pool_kw + if request_context is None: + request_context = self.connection_pool_kw.copy() + + # Although the context has everything necessary to create the pool, + # this function has historically only used the scheme, host, and port + # in the positional args. When an API change is acceptable these can + # be removed. + for key in ('scheme', 'host', 'port'): + request_context.pop(key, None) + if scheme == 'http': - kwargs = self.connection_pool_kw.copy() for kw in SSL_KEYWORDS: - kwargs.pop(kw, None) + request_context.pop(kw, None) - return pool_cls(host, port, **kwargs) + return pool_cls(host, port, **request_context) def clear(self): """ @@ -155,18 +203,21 @@ """ self.pools.clear() - def connection_from_host(self, host, port=None, scheme='http'): + def connection_from_host(self, host, port=None, scheme='http', pool_kwargs=None): """ Get a :class:`ConnectionPool` based on the host, port, and scheme. If ``port`` isn't given, it will be derived from the ``scheme`` using - ``urllib3.connectionpool.port_by_scheme``. + ``urllib3.connectionpool.port_by_scheme``. If ``pool_kwargs`` is + provided, it is merged with the instance's ``connection_pool_kw`` + variable and used to create the new connection pool, if one is + needed. 
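+
+        A minimal, illustrative example (the host and keyword values here
+        are hypothetical)::
+
+            pool = manager.connection_from_host(
+                'example.org', port=443, scheme='https',
+                pool_kwargs={'maxsize': 5},
+            )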
""" if not host: raise LocationValueError("No host specified.") - request_context = self.connection_pool_kw.copy() + request_context = self._merge_pool_kwargs(pool_kwargs) request_context['scheme'] = scheme or 'http' if not port: port = port_by_scheme.get(request_context['scheme'].lower(), 80) @@ -186,9 +237,9 @@ pool_key_constructor = self.key_fn_by_scheme[scheme] pool_key = pool_key_constructor(request_context) - return self.connection_from_pool_key(pool_key) + return self.connection_from_pool_key(pool_key, request_context=request_context) - def connection_from_pool_key(self, pool_key): + def connection_from_pool_key(self, pool_key, request_context=None): """ Get a :class:`ConnectionPool` based on the provided pool key. @@ -204,22 +255,48 @@ return pool # Make a fresh ConnectionPool of the desired type - pool = self._new_pool(pool_key.scheme, pool_key.host, pool_key.port) + scheme = request_context['scheme'] + host = request_context['host'] + port = request_context['port'] + pool = self._new_pool(scheme, host, port, request_context=request_context) self.pools[pool_key] = pool return pool - def connection_from_url(self, url): + def connection_from_url(self, url, pool_kwargs=None): """ - Similar to :func:`urllib3.connectionpool.connection_from_url` but - doesn't pass any additional parameters to the - :class:`urllib3.connectionpool.ConnectionPool` constructor. + Similar to :func:`urllib3.connectionpool.connection_from_url`. - Additional parameters are taken from the :class:`.PoolManager` - constructor. + If ``pool_kwargs`` is not provided and a new pool needs to be + constructed, ``self.connection_pool_kw`` is used to initialize + the :class:`urllib3.connectionpool.ConnectionPool`. If ``pool_kwargs`` + is provided, it is used instead. Note that if a new pool does not + need to be created for the request, the provided ``pool_kwargs`` are + not used. """ u = parse_url(url) - return self.connection_from_host(u.host, port=u.port, scheme=u.scheme) + return self.connection_from_host(u.host, port=u.port, scheme=u.scheme, + pool_kwargs=pool_kwargs) + + def _merge_pool_kwargs(self, override): + """ + Merge a dictionary of override values for self.connection_pool_kw. + + This does not modify self.connection_pool_kw and returns a new dict. + Any keys in the override dictionary with a value of ``None`` are + removed from the merged dictionary. 
+ """ + base_pool_kwargs = self.connection_pool_kw.copy() + if override: + for key, value in override.items(): + if value is None: + try: + del base_pool_kwargs[key] + except KeyError: + pass + else: + base_pool_kwargs[key] = value + return base_pool_kwargs def urlopen(self, method, url, redirect=True, **kw): """ @@ -322,13 +399,13 @@ super(ProxyManager, self).__init__( num_pools, headers, **connection_pool_kw) - def connection_from_host(self, host, port=None, scheme='http'): + def connection_from_host(self, host, port=None, scheme='http', pool_kwargs=None): if scheme == "https": return super(ProxyManager, self).connection_from_host( - host, port, scheme) + host, port, scheme, pool_kwargs=pool_kwargs) return super(ProxyManager, self).connection_from_host( - self.proxy.host, self.proxy.port, self.proxy.scheme) + self.proxy.host, self.proxy.port, self.proxy.scheme, pool_kwargs=pool_kwargs) def _set_proxy_headers(self, url, headers=None): """ diff -Nru python-urllib3-1.19.1/urllib3/response.py python-urllib3-1.21.1/urllib3/response.py --- python-urllib3-1.19.1/urllib3/response.py 2016-11-03 15:16:16.000000000 +0000 +++ python-urllib3-1.21.1/urllib3/response.py 2017-05-02 09:08:45.000000000 +0000 @@ -38,7 +38,11 @@ self._data += data try: - return self._obj.decompress(data) + decompressed = self._obj.decompress(data) + if decompressed: + self._first_try = False + self._data = None + return decompressed except zlib.error: self._first_try = False self._obj = zlib.decompressobj(-zlib.MAX_WBITS) diff -Nru python-urllib3-1.19.1/urllib3/util/connection.py python-urllib3-1.21.1/urllib3/util/connection.py --- python-urllib3-1.19.1/urllib3/util/connection.py 2016-11-16 10:03:57.000000000 +0000 +++ python-urllib3-1.21.1/urllib3/util/connection.py 2017-05-02 09:08:45.000000000 +0000 @@ -1,13 +1,7 @@ from __future__ import absolute_import import socket -try: - from select import poll, POLLIN -except ImportError: # `poll` doesn't exist on OSX and other platforms - poll = False - try: - from select import select - except ImportError: # `select` doesn't exist on AppEngine. - select = False +from .wait import wait_for_read +from .selectors import HAS_SELECT, SelectorError def is_connection_dropped(conn): # Platform-specific @@ -26,22 +20,13 @@ if sock is None: # Connection already closed (such as by httplib). return True - if not poll: - if not select: # Platform-specific: AppEngine - return False - - try: - return select([sock], [], [], 0.0)[0] - except socket.error: - return True - - # This version is better on platforms that support it. - p = poll() - p.register(sock, POLLIN) - for (fno, ev) in p.poll(0.0): - if fno == sock.fileno(): - # Either data is buffered (bad), or the connection is dropped. 
- return True + if not HAS_SELECT: + return False + + try: + return bool(wait_for_read(sock, timeout=0.0)) + except SelectorError: + return True # This function is copied from socket.py in the Python 2.7 standard diff -Nru python-urllib3-1.19.1/urllib3/util/__init__.py python-urllib3-1.21.1/urllib3/util/__init__.py --- python-urllib3-1.19.1/urllib3/util/__init__.py 2016-11-16 09:43:02.000000000 +0000 +++ python-urllib3-1.21.1/urllib3/util/__init__.py 2017-04-25 11:28:36.000000000 +0000 @@ -7,6 +7,7 @@ SSLContext, HAS_SNI, IS_PYOPENSSL, + IS_SECURETRANSPORT, assert_fingerprint, resolve_cert_reqs, resolve_ssl_version, @@ -24,10 +25,15 @@ split_first, Url, ) +from .wait import ( + wait_for_read, + wait_for_write +) __all__ = ( 'HAS_SNI', 'IS_PYOPENSSL', + 'IS_SECURETRANSPORT', 'SSLContext', 'Retry', 'Timeout', @@ -43,4 +49,6 @@ 'resolve_ssl_version', 'split_first', 'ssl_wrap_socket', + 'wait_for_read', + 'wait_for_write' ) diff -Nru python-urllib3-1.19.1/urllib3/util/request.py python-urllib3-1.21.1/urllib3/util/request.py --- python-urllib3-1.19.1/urllib3/util/request.py 2016-09-12 09:02:32.000000000 +0000 +++ python-urllib3-1.21.1/urllib3/util/request.py 2017-04-25 11:10:19.000000000 +0000 @@ -1,9 +1,11 @@ from __future__ import absolute_import from base64 import b64encode -from ..packages.six import b +from ..packages.six import b, integer_types +from ..exceptions import UnrewindableBodyError ACCEPT_ENCODING = 'gzip,deflate' +_FAILEDTELL = object() def make_headers(keep_alive=None, accept_encoding=None, user_agent=None, @@ -70,3 +72,47 @@ headers['cache-control'] = 'no-cache' return headers + + +def set_file_position(body, pos): + """ + If a position is provided, move file to that point. + Otherwise, we'll attempt to record a position for future use. + """ + if pos is not None: + rewind_body(body, pos) + elif getattr(body, 'tell', None) is not None: + try: + pos = body.tell() + except (IOError, OSError): + # This differentiates from None, allowing us to catch + # a failed `tell()` later when trying to rewind the body. + pos = _FAILEDTELL + + return pos + + +def rewind_body(body, body_pos): + """ + Attempt to rewind body to a certain position. + Primarily used for request redirects and retries. + + :param body: + File-like object that supports seek. + + :param int pos: + Position to seek to in file. + """ + body_seek = getattr(body, 'seek', None) + if body_seek is not None and isinstance(body_pos, integer_types): + try: + body_seek(body_pos) + except (IOError, OSError): + raise UnrewindableBodyError("An error occurred when rewinding request " + "body for redirect/retry.") + elif body_pos is _FAILEDTELL: + raise UnrewindableBodyError("Unable to record file position for rewinding " + "request body during a redirect/retry.") + else: + raise ValueError("body_pos must be of type integer, " + "instead it was %s." % type(body_pos)) diff -Nru python-urllib3-1.19.1/urllib3/util/retry.py python-urllib3-1.21.1/urllib3/util/retry.py --- python-urllib3-1.19.1/urllib3/util/retry.py 2016-11-03 15:16:16.000000000 +0000 +++ python-urllib3-1.21.1/urllib3/util/retry.py 2017-04-25 11:28:36.000000000 +0000 @@ -85,6 +85,14 @@ Set to ``False`` to disable and imply ``raise_on_redirect=False``. + :param int status: + How many times to retry on bad status codes. + + These are retries made on responses, where status code matches + ``status_forcelist``. + + Set to ``0`` to fail on the first retry of this type. + :param iterable method_whitelist: Set of uppercased HTTP method verbs that we should retry on. 
@@ -141,7 +149,7 @@ #: Maximum backoff time. BACKOFF_MAX = 120 - def __init__(self, total=10, connect=None, read=None, redirect=None, + def __init__(self, total=10, connect=None, read=None, redirect=None, status=None, method_whitelist=DEFAULT_METHOD_WHITELIST, status_forcelist=None, backoff_factor=0, raise_on_redirect=True, raise_on_status=True, history=None, respect_retry_after_header=True): @@ -149,6 +157,7 @@ self.total = total self.connect = connect self.read = read + self.status = status if redirect is False or total is False: redirect = 0 @@ -166,7 +175,7 @@ def new(self, **kw): params = dict( total=self.total, - connect=self.connect, read=self.read, redirect=self.redirect, + connect=self.connect, read=self.read, redirect=self.redirect, status=self.status, method_whitelist=self.method_whitelist, status_forcelist=self.status_forcelist, backoff_factor=self.backoff_factor, @@ -273,12 +282,25 @@ """ return isinstance(err, (ReadTimeoutError, ProtocolError)) - def is_retry(self, method, status_code, has_retry_after=False): - """ Is this method/status code retryable? (Based on method/codes whitelists) + def _is_method_retryable(self, method): + """ Checks if a given HTTP method should be retried upon, depending if + it is included on the method whitelist. """ if self.method_whitelist and method.upper() not in self.method_whitelist: return False + return True + + def is_retry(self, method, status_code, has_retry_after=False): + """ Is this method/status code retryable? (Based on whitelists and control + variables such as the number of total retries to allow, whether to + respect the Retry-After header, whether this header is present, and + whether the returned status code is on the list of status codes to + be retried upon on the presence of the aforementioned header) + """ + if not self._is_method_retryable(method): + return False + if self.status_forcelist and status_code in self.status_forcelist: return True @@ -287,7 +309,7 @@ def is_exhausted(self): """ Are we out of retries? """ - retry_counts = (self.total, self.connect, self.read, self.redirect) + retry_counts = (self.total, self.connect, self.read, self.redirect, self.status) retry_counts = list(filter(None, retry_counts)) if not retry_counts: return False @@ -317,6 +339,7 @@ connect = self.connect read = self.read redirect = self.redirect + status_count = self.status cause = 'unknown' status = None redirect_location = None @@ -330,7 +353,7 @@ elif error and self._is_read_error(error): # Read retry? 
-            if read is False:
+            if read is False or not self._is_method_retryable(method):
                 raise six.reraise(type(error), error, _stacktrace)
             elif read is not None:
                 read -= 1
 
@@ -348,6 +371,8 @@
             # status_forcelist and a the given method is in the whitelist
             cause = ResponseError.GENERIC_ERROR
             if response and response.status:
+                if status_count is not None:
+                    status_count -= 1
                 cause = ResponseError.SPECIFIC_ERROR.format(
                     status_code=response.status)
                 status = response.status
@@ -356,7 +381,7 @@
 
         new_retry = self.new(
             total=total,
-            connect=connect, read=read, redirect=redirect,
+            connect=connect, read=read, redirect=redirect, status=status_count,
             history=history)
 
         if new_retry.is_exhausted():
@@ -368,7 +393,7 @@
 
     def __repr__(self):
         return ('{cls.__name__}(total={self.total}, connect={self.connect}, '
-                'read={self.read}, redirect={self.redirect})').format(
+                'read={self.read}, redirect={self.redirect}, status={self.status})').format(
                 cls=type(self), self=self)
 
 
diff -Nru python-urllib3-1.19.1/urllib3/util/selectors.py python-urllib3-1.21.1/urllib3/util/selectors.py
--- python-urllib3-1.19.1/urllib3/util/selectors.py	1970-01-01 00:00:00.000000000 +0000
+++ python-urllib3-1.21.1/urllib3/util/selectors.py	2017-04-25 11:10:19.000000000 +0000
@@ -0,0 +1,581 @@
+# Backport of selectors.py from Python 3.5+ to support Python < 3.4
+# Also has the behavior specified in PEP 475 which is to retry syscalls
+# in the case of an EINTR error. This module is required because selectors34
+# does not follow this behavior and instead returns that no file descriptor
+# events have occurred rather than retry the syscall. The decision to drop
+# support for select.devpoll is made to maintain 100% test coverage.
+
+import errno
+import math
+import select
+import socket
+import sys
+import time
+from collections import namedtuple, Mapping
+
+try:
+    monotonic = time.monotonic
+except (AttributeError, ImportError):  # Python < 3.3
+    monotonic = time.time
+
+EVENT_READ = (1 << 0)
+EVENT_WRITE = (1 << 1)
+
+HAS_SELECT = True  # Variable that shows whether the platform has a selector.
+_SYSCALL_SENTINEL = object()  # Sentinel in case a system call returns None.
+_DEFAULT_SELECTOR = None
+
+
+class SelectorError(Exception):
+    def __init__(self, errcode):
+        super(SelectorError, self).__init__()
+        self.errno = errcode
+
+    def __repr__(self):
+        return "<SelectorError errno={0}>".format(self.errno)
+
+    def __str__(self):
+        return self.__repr__()
+
+
+def _fileobj_to_fd(fileobj):
+    """ Return a file descriptor from a file object. If
+    given an integer will simply return that integer back. """
+    if isinstance(fileobj, int):
+        fd = fileobj
+    else:
+        try:
+            fd = int(fileobj.fileno())
+        except (AttributeError, TypeError, ValueError):
+            raise ValueError("Invalid file object: {0!r}".format(fileobj))
+    if fd < 0:
+        raise ValueError("Invalid file descriptor: {0}".format(fd))
+    return fd
+
+
+# Determine which function to use to wrap system calls because Python 3.5+
+# already handles the case when system calls are interrupted.
+if sys.version_info >= (3, 5):
+    def _syscall_wrapper(func, _, *args, **kwargs):
+        """ This is the short-circuit version of the below logic
+        because in Python 3.5+ all system calls automatically restart
+        and recalculate their timeouts. """
+        try:
+            return func(*args, **kwargs)
+        except (OSError, IOError, select.error) as e:
+            errcode = None
+            if hasattr(e, "errno"):
+                errcode = e.errno
+            raise SelectorError(errcode)
+else:
+    def _syscall_wrapper(func, recalc_timeout, *args, **kwargs):
+        """ Wrapper function for syscalls that could fail due to EINTR.
+        All functions should be retried if there is time left in the timeout
+        in accordance with PEP 475. """
+        timeout = kwargs.get("timeout", None)
+        if timeout is None:
+            expires = None
+            recalc_timeout = False
+        else:
+            timeout = float(timeout)
+            if timeout < 0.0:  # Timeout less than 0 treated as no timeout.
+                expires = None
+            else:
+                expires = monotonic() + timeout
+
+        args = list(args)
+        if recalc_timeout and "timeout" not in kwargs:
+            raise ValueError(
+                "Timeout must be in args or kwargs to be recalculated")
+
+        result = _SYSCALL_SENTINEL
+        while result is _SYSCALL_SENTINEL:
+            try:
+                result = func(*args, **kwargs)
+            # OSError is thrown by select.select
+            # IOError is thrown by select.epoll.poll
+            # select.error is thrown by select.poll.poll
+            # Aren't we thankful for Python 3.x rework for exceptions?
+            except (OSError, IOError, select.error) as e:
+                # select.error wasn't a subclass of OSError in the past.
+                errcode = None
+                if hasattr(e, "errno"):
+                    errcode = e.errno
+                elif hasattr(e, "args"):
+                    errcode = e.args[0]
+
+                # Also test for the Windows equivalent of EINTR.
+                is_interrupt = (errcode == errno.EINTR or (hasattr(errno, "WSAEINTR") and
+                                                           errcode == errno.WSAEINTR))
+
+                if is_interrupt:
+                    if expires is not None:
+                        current_time = monotonic()
+                        if current_time > expires:
+                            raise OSError(errno.ETIMEDOUT, 'Connection timed out')
+                        if recalc_timeout:
+                            if "timeout" in kwargs:
+                                kwargs["timeout"] = expires - current_time
+                    continue
+                if errcode:
+                    raise SelectorError(errcode)
+                else:
+                    raise
+        return result
+
+
+SelectorKey = namedtuple('SelectorKey', ['fileobj', 'fd', 'events', 'data'])
+
+
+class _SelectorMapping(Mapping):
+    """ Mapping of file objects to selector keys """
+
+    def __init__(self, selector):
+        self._selector = selector
+
+    def __len__(self):
+        return len(self._selector._fd_to_key)
+
+    def __getitem__(self, fileobj):
+        try:
+            fd = self._selector._fileobj_lookup(fileobj)
+            return self._selector._fd_to_key[fd]
+        except KeyError:
+            raise KeyError("{0!r} is not registered.".format(fileobj))
+
+    def __iter__(self):
+        return iter(self._selector._fd_to_key)
+
+
+class BaseSelector(object):
+    """ Abstract Selector class
+
+    A selector supports registering file objects to be monitored
+    for specific I/O events.
+
+    A file object is a file descriptor or any object with a
+    `fileno()` method. An arbitrary object can be attached to the
+    file object which can be used for example to store context info,
+    a callback, etc.
+
+    A selector can use various implementations (select(), poll(), epoll(),
+    and kqueue()) depending on the platform. The 'DefaultSelector' class uses
+    the most efficient implementation for the current platform.
+    """
+    def __init__(self):
+        # Maps file descriptors to keys.
+        self._fd_to_key = {}
+
+        # Read-only mapping returned by get_map()
+        self._map = _SelectorMapping(self)
+
+    def _fileobj_lookup(self, fileobj):
+        """ Return a file descriptor from a file object.
+        This wraps _fileobj_to_fd() to do an exhaustive
+        search in case the object is invalid but we still
+        have it in our map. Used by unregister() so we can
+        unregister an object that was previously registered
+        even if it is closed. It is also used by _SelectorMapping
+        """
+        try:
+            return _fileobj_to_fd(fileobj)
+        except ValueError:
+
+            # Search through all our mapped keys.
+            for key in self._fd_to_key.values():
+                if key.fileobj is fileobj:
+                    return key.fd
+
+            # Raise ValueError after all.
+            raise
+
+    def register(self, fileobj, events, data=None):
+        """ Register a file object for a set of events to monitor.
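+
+        For example (illustrative; ``sock`` is any socket object)::
+
+            key = selector.register(sock, EVENT_READ | EVENT_WRITE)
+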
""" + if (not events) or (events & ~(EVENT_READ | EVENT_WRITE)): + raise ValueError("Invalid events: {0!r}".format(events)) + + key = SelectorKey(fileobj, self._fileobj_lookup(fileobj), events, data) + + if key.fd in self._fd_to_key: + raise KeyError("{0!r} (FD {1}) is already registered" + .format(fileobj, key.fd)) + + self._fd_to_key[key.fd] = key + return key + + def unregister(self, fileobj): + """ Unregister a file object from being monitored. """ + try: + key = self._fd_to_key.pop(self._fileobj_lookup(fileobj)) + except KeyError: + raise KeyError("{0!r} is not registered".format(fileobj)) + + # Getting the fileno of a closed socket on Windows errors with EBADF. + except socket.error as e: # Platform-specific: Windows. + if e.errno != errno.EBADF: + raise + else: + for key in self._fd_to_key.values(): + if key.fileobj is fileobj: + self._fd_to_key.pop(key.fd) + break + else: + raise KeyError("{0!r} is not registered".format(fileobj)) + return key + + def modify(self, fileobj, events, data=None): + """ Change a registered file object monitored events and data. """ + # NOTE: Some subclasses optimize this operation even further. + try: + key = self._fd_to_key[self._fileobj_lookup(fileobj)] + except KeyError: + raise KeyError("{0!r} is not registered".format(fileobj)) + + if events != key.events: + self.unregister(fileobj) + key = self.register(fileobj, events, data) + + elif data != key.data: + # Use a shortcut to update the data. + key = key._replace(data=data) + self._fd_to_key[key.fd] = key + + return key + + def select(self, timeout=None): + """ Perform the actual selection until some monitored file objects + are ready or the timeout expires. """ + raise NotImplementedError() + + def close(self): + """ Close the selector. This must be called to ensure that all + underlying resources are freed. """ + self._fd_to_key.clear() + self._map = None + + def get_key(self, fileobj): + """ Return the key associated with a registered file object. """ + mapping = self.get_map() + if mapping is None: + raise RuntimeError("Selector is closed") + try: + return mapping[fileobj] + except KeyError: + raise KeyError("{0!r} is not registered".format(fileobj)) + + def get_map(self): + """ Return a mapping of file objects to selector keys """ + return self._map + + def _key_from_fd(self, fd): + """ Return the key associated to a given file descriptor + Return None if it is not found. """ + try: + return self._fd_to_key[fd] + except KeyError: + return None + + def __enter__(self): + return self + + def __exit__(self, *args): + self.close() + + +# Almost all platforms have select.select() +if hasattr(select, "select"): + class SelectSelector(BaseSelector): + """ Select-based selector. """ + def __init__(self): + super(SelectSelector, self).__init__() + self._readers = set() + self._writers = set() + + def register(self, fileobj, events, data=None): + key = super(SelectSelector, self).register(fileobj, events, data) + if events & EVENT_READ: + self._readers.add(key.fd) + if events & EVENT_WRITE: + self._writers.add(key.fd) + return key + + def unregister(self, fileobj): + key = super(SelectSelector, self).unregister(fileobj) + self._readers.discard(key.fd) + self._writers.discard(key.fd) + return key + + def _select(self, r, w, timeout=None): + """ Wrapper for select.select because timeout is a positional arg """ + return select.select(r, w, [], timeout) + + def select(self, timeout=None): + # Selecting on empty lists on Windows errors out. 
+            if not len(self._readers) and not len(self._writers):
+                return []
+
+            timeout = None if timeout is None else max(timeout, 0.0)
+            ready = []
+            r, w, _ = _syscall_wrapper(self._select, True, self._readers,
+                                       self._writers, timeout)
+            r = set(r)
+            w = set(w)
+            for fd in r | w:
+                events = 0
+                if fd in r:
+                    events |= EVENT_READ
+                if fd in w:
+                    events |= EVENT_WRITE
+
+                key = self._key_from_fd(fd)
+                if key:
+                    ready.append((key, events & key.events))
+            return ready
+
+
+if hasattr(select, "poll"):
+    class PollSelector(BaseSelector):
+        """ Poll-based selector """
+        def __init__(self):
+            super(PollSelector, self).__init__()
+            self._poll = select.poll()
+
+        def register(self, fileobj, events, data=None):
+            key = super(PollSelector, self).register(fileobj, events, data)
+            event_mask = 0
+            if events & EVENT_READ:
+                event_mask |= select.POLLIN
+            if events & EVENT_WRITE:
+                event_mask |= select.POLLOUT
+            self._poll.register(key.fd, event_mask)
+            return key
+
+        def unregister(self, fileobj):
+            key = super(PollSelector, self).unregister(fileobj)
+            self._poll.unregister(key.fd)
+            return key
+
+        def _wrap_poll(self, timeout=None):
+            """ Wrapper function for select.poll.poll() so that
+            _syscall_wrapper can work with only seconds. """
+            if timeout is not None:
+                if timeout <= 0:
+                    timeout = 0
+                else:
+                    # select.poll.poll() has a resolution of 1 millisecond,
+                    # round away from zero to wait *at least* timeout seconds.
+                    timeout = math.ceil(timeout * 1e3)
+
+            result = self._poll.poll(timeout)
+            return result
+
+        def select(self, timeout=None):
+            ready = []
+            fd_events = _syscall_wrapper(self._wrap_poll, True, timeout=timeout)
+            for fd, event_mask in fd_events:
+                events = 0
+                if event_mask & ~select.POLLIN:
+                    events |= EVENT_WRITE
+                if event_mask & ~select.POLLOUT:
+                    events |= EVENT_READ
+
+                key = self._key_from_fd(fd)
+                if key:
+                    ready.append((key, events & key.events))
+
+            return ready
+
+
+if hasattr(select, "epoll"):
+    class EpollSelector(BaseSelector):
+        """ Epoll-based selector """
+        def __init__(self):
+            super(EpollSelector, self).__init__()
+            self._epoll = select.epoll()
+
+        def fileno(self):
+            return self._epoll.fileno()
+
+        def register(self, fileobj, events, data=None):
+            key = super(EpollSelector, self).register(fileobj, events, data)
+            events_mask = 0
+            if events & EVENT_READ:
+                events_mask |= select.EPOLLIN
+            if events & EVENT_WRITE:
+                events_mask |= select.EPOLLOUT
+            _syscall_wrapper(self._epoll.register, False, key.fd, events_mask)
+            return key
+
+        def unregister(self, fileobj):
+            key = super(EpollSelector, self).unregister(fileobj)
+            try:
+                _syscall_wrapper(self._epoll.unregister, False, key.fd)
+            except SelectorError:
+                # This can occur when the fd was closed since registration.
+                pass
+            return key
+
+        def select(self, timeout=None):
+            if timeout is not None:
+                if timeout <= 0:
+                    timeout = 0.0
+                else:
+                    # select.epoll.poll() has a resolution of 1 millisecond
+                    # but luckily takes seconds so we don't need a wrapper
+                    # like PollSelector. Just for better rounding.
+                    timeout = math.ceil(timeout * 1e3) * 1e-3
+                timeout = float(timeout)
+            else:
+                timeout = -1.0  # epoll.poll() must have a float.
+
+            # We always want at least 1 to ensure that select can be called
+            # with no file descriptors registered. Otherwise it will fail.
+ max_events = max(len(self._fd_to_key), 1) + + ready = [] + fd_events = _syscall_wrapper(self._epoll.poll, True, + timeout=timeout, + maxevents=max_events) + for fd, event_mask in fd_events: + events = 0 + if event_mask & ~select.EPOLLIN: + events |= EVENT_WRITE + if event_mask & ~select.EPOLLOUT: + events |= EVENT_READ + + key = self._key_from_fd(fd) + if key: + ready.append((key, events & key.events)) + return ready + + def close(self): + self._epoll.close() + super(EpollSelector, self).close() + + +if hasattr(select, "kqueue"): + class KqueueSelector(BaseSelector): + """ Kqueue / Kevent-based selector """ + def __init__(self): + super(KqueueSelector, self).__init__() + self._kqueue = select.kqueue() + + def fileno(self): + return self._kqueue.fileno() + + def register(self, fileobj, events, data=None): + key = super(KqueueSelector, self).register(fileobj, events, data) + if events & EVENT_READ: + kevent = select.kevent(key.fd, + select.KQ_FILTER_READ, + select.KQ_EV_ADD) + + _syscall_wrapper(self._kqueue.control, False, [kevent], 0, 0) + + if events & EVENT_WRITE: + kevent = select.kevent(key.fd, + select.KQ_FILTER_WRITE, + select.KQ_EV_ADD) + + _syscall_wrapper(self._kqueue.control, False, [kevent], 0, 0) + + return key + + def unregister(self, fileobj): + key = super(KqueueSelector, self).unregister(fileobj) + if key.events & EVENT_READ: + kevent = select.kevent(key.fd, + select.KQ_FILTER_READ, + select.KQ_EV_DELETE) + try: + _syscall_wrapper(self._kqueue.control, False, [kevent], 0, 0) + except SelectorError: + pass + if key.events & EVENT_WRITE: + kevent = select.kevent(key.fd, + select.KQ_FILTER_WRITE, + select.KQ_EV_DELETE) + try: + _syscall_wrapper(self._kqueue.control, False, [kevent], 0, 0) + except SelectorError: + pass + + return key + + def select(self, timeout=None): + if timeout is not None: + timeout = max(timeout, 0) + + max_events = len(self._fd_to_key) * 2 + ready_fds = {} + + kevent_list = _syscall_wrapper(self._kqueue.control, True, + None, max_events, timeout) + + for kevent in kevent_list: + fd = kevent.ident + event_mask = kevent.filter + events = 0 + if event_mask == select.KQ_FILTER_READ: + events |= EVENT_READ + if event_mask == select.KQ_FILTER_WRITE: + events |= EVENT_WRITE + + key = self._key_from_fd(fd) + if key: + if key.fd not in ready_fds: + ready_fds[key.fd] = (key, events & key.events) + else: + old_events = ready_fds[key.fd][1] + ready_fds[key.fd] = (key, (events | old_events) & key.events) + + return list(ready_fds.values()) + + def close(self): + self._kqueue.close() + super(KqueueSelector, self).close() + + +if not hasattr(select, 'select'): # Platform-specific: AppEngine + HAS_SELECT = False + + +def _can_allocate(struct): + """ Checks that select structs can be allocated by the underlying + operating system, not just advertised by the select module. We don't + check select() because we'll be hopeful that most platforms that + don't have it available will not advertise it. (ie: GAE) """ + try: + # select.poll() objects won't fail until used. + if struct == 'poll': + p = select.poll() + p.poll(0) + + # All others will fail on allocation. + else: + getattr(select, struct)().close() + return True + except (OSError, AttributeError) as e: + return False + + +# Choose the best implementation, roughly: +# kqueue == epoll > poll > select. Devpoll not supported. 
(See above)
+# select() also can't accept a FD > FD_SETSIZE (usually around 1024)
+def DefaultSelector():
+    """ Chooses the best selector implementation on first call so that
+    a select module that was monkey-patched after import (e.g. by
+    eventlet or greenlet) is detected and proper behavior is preserved. """
+    global _DEFAULT_SELECTOR
+    if _DEFAULT_SELECTOR is None:
+        if _can_allocate('kqueue'):
+            _DEFAULT_SELECTOR = KqueueSelector
+        elif _can_allocate('epoll'):
+            _DEFAULT_SELECTOR = EpollSelector
+        elif _can_allocate('poll'):
+            _DEFAULT_SELECTOR = PollSelector
+        elif hasattr(select, 'select'):
+            _DEFAULT_SELECTOR = SelectSelector
+        else:  # Platform-specific: AppEngine
+            raise ValueError('Platform does not have a selector')
+    return _DEFAULT_SELECTOR()
diff -Nru python-urllib3-1.19.1/urllib3/util/ssl_.py python-urllib3-1.21.1/urllib3/util/ssl_.py
--- python-urllib3-1.19.1/urllib3/util/ssl_.py 2016-10-20 13:16:56.000000000 +0000
+++ python-urllib3-1.21.1/urllib3/util/ssl_.py 2017-05-02 09:08:45.000000000 +0000
@@ -12,6 +12,7 @@
 SSLContext = None
 HAS_SNI = False
 IS_PYOPENSSL = False
+IS_SECURETRANSPORT = False
 
 # Maps the length of a digest to a possible hash function producing this digest
 HASHFUNC_MAP = {
diff -Nru python-urllib3-1.19.1/urllib3/util/timeout.py python-urllib3-1.21.1/urllib3/util/timeout.py
--- python-urllib3-1.19.1/urllib3/util/timeout.py 2016-11-16 09:43:02.000000000 +0000
+++ python-urllib3-1.21.1/urllib3/util/timeout.py 2017-01-19 09:51:46.000000000 +0000
@@ -11,11 +11,8 @@
 _Default = object()
 
 
-def current_time():
-    """
-    Retrieve the current time. This function is mocked out in unit testing.
-    """
-    return time.time()
+# Use time.monotonic if available.
+current_time = getattr(time, "monotonic", time.time)
 
 
 class Timeout(object):
diff -Nru python-urllib3-1.19.1/urllib3/util/url.py python-urllib3-1.21.1/urllib3/util/url.py
--- python-urllib3-1.19.1/urllib3/util/url.py 2016-11-03 15:16:16.000000000 +0000
+++ python-urllib3-1.21.1/urllib3/util/url.py 2017-04-25 11:10:19.000000000 +0000
@@ -6,6 +6,10 @@
 
 url_attrs = ['scheme', 'auth', 'host', 'port', 'path', 'query', 'fragment']
 
+# We only want to normalize URLs with an HTTP(S) scheme.
+# urllib3 infers URLs without a scheme (None) to be http.
+NORMALIZABLE_SCHEMES = ('http', 'https', None)
+
 
 class Url(namedtuple('Url', url_attrs)):
     """
@@ -21,7 +25,7 @@
             path = '/' + path
         if scheme:
             scheme = scheme.lower()
-        if host:
+        if host and scheme in NORMALIZABLE_SCHEMES:
             host = host.lower()
         return super(Url, cls).__new__(cls, scheme, auth, host, port, path,
                                        query, fragment)
diff -Nru python-urllib3-1.19.1/urllib3/util/wait.py python-urllib3-1.21.1/urllib3/util/wait.py
--- python-urllib3-1.19.1/urllib3/util/wait.py 1970-01-01 00:00:00.000000000 +0000
+++ python-urllib3-1.21.1/urllib3/util/wait.py 2017-01-19 09:51:46.000000000 +0000
@@ -0,0 +1,40 @@
+from .selectors import (
+    HAS_SELECT,
+    DefaultSelector,
+    EVENT_READ,
+    EVENT_WRITE
+)
+
+
+def _wait_for_io_events(socks, events, timeout=None):
+    """ Waits for IO events to be available from a list of sockets
+    or optionally a single socket if passed in. Returns a list of
+    sockets that can be interacted with immediately. """
+    if not HAS_SELECT:
+        raise ValueError('Platform does not have a selector')
+    if not isinstance(socks, list):
+        # Probably just a single socket.
+        if hasattr(socks, "fileno"):
+            socks = [socks]
+        # Otherwise it might be a non-list iterable.
+ else: + socks = list(socks) + with DefaultSelector() as selector: + for sock in socks: + selector.register(sock, events) + return [key[0].fileobj for key in + selector.select(timeout) if key[1] & events] + + +def wait_for_read(socks, timeout=None): + """ Waits for reading to be available from a list of sockets + or optionally a single socket if passed in. Returns a list of + sockets that can be read from immediately. """ + return _wait_for_io_events(socks, EVENT_READ, timeout) + + +def wait_for_write(socks, timeout=None): + """ Waits for writing to be available from a list of sockets + or optionally a single socket if passed in. Returns a list of + sockets that can be written to immediately. """ + return _wait_for_io_events(socks, EVENT_WRITE, timeout) diff -Nru python-urllib3-1.19.1/urllib3.egg-info/PKG-INFO python-urllib3-1.21.1/urllib3.egg-info/PKG-INFO --- python-urllib3-1.19.1/urllib3.egg-info/PKG-INFO 2016-11-16 10:12:32.000000000 +0000 +++ python-urllib3-1.21.1/urllib3.egg-info/PKG-INFO 2017-05-02 10:57:45.000000000 +0000 @@ -1,6 +1,6 @@ Metadata-Version: 1.1 Name: urllib3 -Version: 1.19.1 +Version: 1.21.1 Summary: HTTP library with thread-safe connection pooling, file post, and more. Home-page: https://urllib3.readthedocs.io/ Author: Andrey Petrov @@ -9,13 +9,21 @@ Description: urllib3 ======= - .. image:: https://travis-ci.org/shazow/urllib3.png?branch=master + .. image:: https://travis-ci.org/shazow/urllib3.svg?branch=master :alt: Build status on Travis :target: https://travis-ci.org/shazow/urllib3 + .. image:: https://img.shields.io/appveyor/ci/shazow/urllib3/master.svg + :alt: Build status on AppVeyor + :target: https://ci.appveyor.com/project/shazow/urllib3 + .. image:: https://readthedocs.org/projects/urllib3/badge/?version=latest :alt: Documentation Status :target: https://urllib3.readthedocs.io/en/latest/ + + .. image:: https://img.shields.io/codecov/c/github/shazow/urllib3.svg + :alt: Coverage Status + :target: https://codecov.io/gh/shazow/urllib3 .. image:: https://img.shields.io/pypi/v/urllib3.svg?maxAge=86400 :alt: PyPI version @@ -94,11 +102,96 @@ Changes ======= + 1.21.1 (2017-05-02) + ------------------- + + * Fixed SecureTransport issue that would cause long delays in response body + delivery. (Pull #1154) + + * Fixed regression in 1.21 that threw exceptions when users passed the + ``socket_options`` flag to the ``PoolManager``. (Issue #1165) + + * Fixed regression in 1.21 that threw exceptions when users passed the + ``assert_hostname`` or ``assert_fingerprint`` flag to the ``PoolManager``. + (Pull #1157) + + + 1.21 (2017-04-25) + ----------------- + + * Improved performance of certain selector system calls on Python 3.5 and + later. (Pull #1095) + + * Resolved issue where the PyOpenSSL backend would not wrap SysCallError + exceptions appropriately when sending data. (Pull #1125) + + * Selectors now detects a monkey-patched select module after import for modules + that patch the select module like eventlet, greenlet. (Pull #1128) + + * Reduced memory consumption when streaming zlib-compressed responses + (as opposed to raw deflate streams). (Pull #1129) + + * Connection pools now use the entire request context when constructing the + pool key. (Pull #1016) + + * ``PoolManager.connection_from_*`` methods now accept a new keyword argument, + ``pool_kwargs``, which are merged with the existing ``connection_pool_kw``. + (Pull #1016) + + * Add retry counter for ``status_forcelist``. 
(Issue #1147)
+
+ * Added ``contrib`` module for using SecureTransport on macOS:
+ ``urllib3.contrib.securetransport``. (Pull #1122)
+
+ * urllib3 now only normalizes the case of ``http://`` and ``https://`` schemes:
+ for schemes it does not recognise, it assumes they are case-sensitive and
+ leaves them unchanged.
+ (Issue #1080)
+
+
+ 1.20 (2017-01-19)
+ -----------------
+
+ * Added support for waiting for I/O using selectors other than select,
+ improving urllib3's behaviour with large numbers of concurrent connections.
+ (Pull #1001)
+
+ * Updated the date for the system clock check. (Issue #1005)
+
+ * ConnectionPools now correctly consider hostnames to be case-insensitive.
+ (Issue #1032)
+
+ * Outdated versions of PyOpenSSL now cause the PyOpenSSL contrib module
+ to fail when it is injected, rather than at first use. (Pull #1063)
+
+ * Outdated versions of cryptography now cause the PyOpenSSL contrib module
+ to fail when it is injected, rather than at first use. (Issue #1044)
+
+ * Automatically attempt to rewind a file-like body object when a request is
+ retried or redirected. (Pull #1039)
+
+ * Fix some bugs that occur when modules incautiously patch the queue module.
+ (Pull #1061)
+
+ * Prevent retries from occurring on read timeouts for which the request method
+ was not in the method whitelist. (Issue #1059)
+
+ * Changed the PyOpenSSL contrib module to lazily load idna to avoid
+ unnecessarily bloating the memory of programs that don't need it. (Pull
+ #1076)
+
+ * Add support for IPv6 literals with zone identifiers. (Pull #1013)
+
+ * Added support for socks5h:// and socks4a:// schemes when working with SOCKS
+ proxies, and controlled remote DNS appropriately. (Issue #1035)
+
+
 1.19.1 (2016-11-16)
 -------------------
 
 * Fixed AppEngine import that didn't function on Python 3.5.
(Pull #1025) + 1.19 (2016-11-03) ----------------- diff -Nru python-urllib3-1.19.1/urllib3.egg-info/SOURCES.txt python-urllib3-1.21.1/urllib3.egg-info/SOURCES.txt --- python-urllib3-1.19.1/urllib3.egg-info/SOURCES.txt 2016-11-16 10:12:35.000000000 +0000 +++ python-urllib3-1.21.1/urllib3.egg-info/SOURCES.txt 2017-05-02 10:57:45.000000000 +0000 @@ -19,6 +19,7 @@ docs/reference/index.rst docs/reference/urllib3.contrib.rst docs/reference/urllib3.util.rst +dummyserver/.DS_Store dummyserver/__init__.py dummyserver/handlers.py dummyserver/proxy.py @@ -32,6 +33,7 @@ dummyserver/certs/client.key dummyserver/certs/client.pem dummyserver/certs/client_bad.pem +dummyserver/certs/server.combined.pem dummyserver/certs/server.crt dummyserver/certs/server.csr dummyserver/certs/server.ip_san.crt @@ -47,6 +49,7 @@ test/__init__.py test/benchmark.py test/port_helpers.py +test/socketpair_helper.py test/test_collections.py test/test_compatibility.py test/test_connection.py @@ -57,8 +60,10 @@ test/test_no_ssl.py test/test_poolmanager.py test/test_proxymanager.py +test/test_queue_monkeypatch.py test/test_response.py test/test_retry.py +test/test_selectors.py test/test_util.py test/appengine/__init__.py test/appengine/app.yaml @@ -68,6 +73,8 @@ test/contrib/__init__.py test/contrib/test_gae_manager.py test/contrib/test_pyopenssl.py +test/contrib/test_pyopenssl_dependencies.py +test/contrib/test_securetransport.py test/contrib/test_socks.py test/with_dummyserver/__init__.py test/with_dummyserver/test_chunked_transfer.py @@ -96,7 +103,11 @@ urllib3/contrib/appengine.py urllib3/contrib/ntlmpool.py urllib3/contrib/pyopenssl.py +urllib3/contrib/securetransport.py urllib3/contrib/socks.py +urllib3/contrib/_securetransport/__init__.py +urllib3/contrib/_securetransport/bindings.py +urllib3/contrib/_securetransport/low_level.py urllib3/packages/__init__.py urllib3/packages/ordered_dict.py urllib3/packages/six.py @@ -109,6 +120,8 @@ urllib3/util/request.py urllib3/util/response.py urllib3/util/retry.py +urllib3/util/selectors.py urllib3/util/ssl_.py urllib3/util/timeout.py -urllib3/util/url.py \ No newline at end of file +urllib3/util/url.py +urllib3/util/wait.py \ No newline at end of file
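
For reference, the retry.py hunk above gives ``Retry`` a dedicated counter
for ``status_forcelist`` retries, exposed as the ``status`` argument. A
minimal sketch of its use (the URL is a placeholder and the counts are
arbitrary)::

    from urllib3 import PoolManager
    from urllib3.util.retry import Retry

    # Allow at most two retries triggered by a response status listed in
    # status_forcelist, independently of the connect/read/redirect counters.
    retry = Retry(total=10, status=2, status_forcelist=[500, 502, 503])

    http = PoolManager()
    # Placeholder URL; any endpoint that may return a 5xx status applies.
    resp = http.request("GET", "http://example.com/", retries=retry)
    print(resp.status)

When the ``status`` counter is exhausted before ``total``, the request fails
with ``MaxRetryError`` even though the other retry budgets remain.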
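The selectors.py backport mirrors the standard library ``selectors`` module
(plus the PEP 475 retry semantics), so it can also be exercised directly. A
short sketch, assuming a throwaway listening socket; ``DefaultSelector``
lazily picks the best implementation the platform can actually allocate::

    import socket

    from urllib3.util.selectors import DefaultSelector, EVENT_READ

    # A throwaway listening socket on an ephemeral port.
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(1)

    with DefaultSelector() as selector:
        selector.register(server, EVENT_READ, data="listener")
        # select() returns (key, events) pairs for ready file objects;
        # with no pending connection it returns [] once the timeout expires.
        print(selector.select(timeout=0.1))

    server.close()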
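``urllib3.util.wait`` wraps that machinery in two small helpers. A sketch,
assuming a reachable host (example.com is only illustrative)::

    import socket

    from urllib3.util.wait import wait_for_read, wait_for_write

    sock = socket.create_connection(("example.com", 80))
    sock.setblocking(False)

    # Both helpers return the subset of the given sockets that became
    # ready before the timeout, so an empty list means the wait timed out.
    if wait_for_write([sock], timeout=5.0):
        sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")

    if wait_for_read([sock], timeout=5.0):
        print(sock.recv(4096))

    sock.close()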