diff -Nru cython-0.26.1/2to3-fixers.txt cython-0.29.14/2to3-fixers.txt
--- cython-0.26.1/2to3-fixers.txt 2015-09-10 16:25:36.000000000 +0000
+++ cython-0.29.14/2to3-fixers.txt 1970-01-01 00:00:00.000000000 +0000
@@ -1 +0,0 @@
-lib2to3.fixes.fix_unicode
diff -Nru cython-0.26.1/bin/cython_freeze cython-0.29.14/bin/cython_freeze
--- cython-0.26.1/bin/cython_freeze 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/bin/cython_freeze 2018-09-22 14:18:56.000000000 +0000
@@ -3,7 +3,7 @@
Create a C file for embedding one or more Cython source files.
Requires Cython 0.11.2 (or perhaps newer).
-See Demos/freeze/README.txt for more details.
+See Demos/freeze/README.rst for more details.
"""
from __future__ import print_function
diff -Nru cython-0.26.1/CHANGES.rst cython-0.29.14/CHANGES.rst
--- cython-0.26.1/CHANGES.rst 2017-08-29 06:15:21.000000000 +0000
+++ cython-0.29.14/CHANGES.rst 2019-11-01 14:13:39.000000000 +0000
@@ -2,6 +2,1021 @@
Cython Changelog
================
+0.29.14 (2019-11-01)
+====================
+
+Bugs fixed
+----------
+
+* The generated code failed to initialise the ``tp_print`` slot in CPython 3.8.
+ Patches by Pablo Galindo and Orivej Desh (Github issues #3171, #3201).
+
+* ``?`` for ``bool`` was missing from the supported NumPy dtypes.
+ Patch by Max Klein. (Github issue #2675)
+
+* ``await`` was not allowed inside of f-strings.
+ Patch by Dmitro Getz. (Github issue #2877)
+
+* Coverage analysis failed for projects where the code resides in separate
+ source sub-directories.
+ Patch by Antonio Valentino. (Github issue #1985)
+
+* An incorrect compiler warning was fixed in automatic C++ string conversions.
+ Patch by Gerion Entrup. (Github issue #3108)
+
+* Error reports in the Jupyter notebook showed unhelpful stack traces.
+ Patch by Matthew Edwards (Github issue #3196).
+
+* ``Python.h`` is now also included explicitly from ``public`` header files.
+ (Github issue #3133).
+
+* Distutils builds with ``--parallel`` did not work when using Cython's
+ deprecated ``build_ext`` command.
+ Patch by Alphadelta14 (Github issue #3187).
+
+Other changes
+-------------
+
+* The ``PyMemoryView_*()`` C-API is available in ``cpython.memoryview``.
+ Patch by Nathan Manville. (Github issue #2541)
+
+
+0.29.13 (2019-07-26)
+====================
+
+Bugs fixed
+----------
+
+* A reference leak for ``None`` was fixed when converting a memoryview
+ to a Python object. (Github issue #3023)
+
+* The declaration of ``PyGILState_STATE`` in ``cpython.pystate`` was unusable.
+ Patch by Kirill Smelkov. (Github issue #2997)
+
+
+Other changes
+-------------
+
+* The declarations in ``posix.mman`` were extended.
+ Patches by Kirill Smelkov. (Github issues #2893, #2894, #3012)
+
+
+0.29.12 (2019-07-07)
+====================
+
+Bugs fixed
+----------
+
+* Fix compile error in CPython 3.8b2 regarding the ``PyCode_New()`` signature.
+ (Github issue #3031)
+
+* Fix a C compiler warning about a missing ``int`` downcast.
+ (Github issue #3028)
+
+* Fix reported error positions of undefined builtins and constants.
+ Patch by Orivej Desh. (Github issue #3030)
+
+* A 32 bit issue in the Pythran support was resolved.
+ Patch by Serge Guelton. (Github issue #3032)
+
+
+0.29.11 (2019-06-30)
+====================
+
+Bugs fixed
+----------
+
+* Fix compile error in CPython 3.8b2 regarding the ``PyCode_New()`` signature.
+ Patch by Nick Coghlan. (Github issue #3009)
+
+* Invalid C code generated for lambda functions in cdef methods.
+ Patch by Josh Tobin. (Github issue #2967)
+
+* Support slice handling in newer Pythran versions.
+ Patch by Serge Guelton. (Github issue #2989)
+
+* A reference leak in power-of-2 calculation was fixed.
+ Patch by Sebastian Berg. (Github issue #3022)
+
+* The search order for include files was changed. Previously it was
+ ``include_directories``, ``Cython/Includes``, ``sys.path``. Now it is
+ ``include_directories``, ``sys.path``, ``Cython/Includes``. This was done to
+ allow third-party ``*.pxd`` files to override the ones in Cython.
+ Original patch by Matti Picus. (Github issue #2905)
+
+* Setting ``language_level=2`` in a file did not work if ``language_level=3``
+ was enabled globally before.
+ Patch by Jeroen Demeyer. (Github issue #2791)
+
+
+0.29.10 (2019-06-02)
+====================
+
+Bugs fixed
+----------
+
+* Fix compile errors in CPython 3.8b1 due to the new "tp_vectorcall" slots.
+ (Github issue #2976)
+
+
+0.29.9 (2019-05-29)
+===================
+
+Bugs fixed
+----------
+
+* Fix a crash regression in 0.29.8 when creating code objects fails.
+
+* Remove an incorrect cast when using true-division in C++ operations.
+ (Github issue #1950)
+
+
+0.29.8 (2019-05-28)
+===================
+
+Bugs fixed
+----------
+
+* C compile errors with CPython 3.8 were resolved.
+ Patch by Marcel Plch. (Github issue #2938)
+
+* Python tuple constants that compare equal but have different item
+ types could incorrectly be merged into a single constant.
+ (Github issue #2919)
+
+* Non-ASCII characters in unprefixed strings could crash the compiler when
+ used with language level ``3str``.
+
+* Starred expressions in %-formatting tuples could fail to compile for
+ unicode strings. (Github issue #2939)
+
+* Passing Python class references through ``cython.inline()`` was broken.
+ (Github issue #2936)
+
+
+0.29.7 (2019-04-14)
+===================
+
+Bugs fixed
+----------
+
+* Crash when the shared Cython config module gets unloaded and another Cython
+ module reports an exception. Cython now makes sure it keeps an owned reference
+ to the module.
+ (Github issue #2885)
+
+* Resolved a C89 compilation problem when enabling the fast-gil sharing feature.
+
+* Coverage reporting did not include the signature line of ``cdef`` functions.
+ (Github issue #1461)
+
+* Casting a GIL-requiring function into a nogil function now issues a warning.
+ (Github issue #2879)
+
+* Generators and coroutines were missing their return type annotation.
+ (Github issue #2884)
+
+
+0.29.6 (2019-02-27)
+===================
+
+Bugs fixed
+----------
+
+* Fix a crash when accessing the ``__kwdefaults__`` special attribute of
+ fused functions. (Github issue #1470)
+
+* Fix the parsing of buffer format strings that contain numeric sizes, which
+ could lead to incorrect input rejections. (Github issue #2845)
+
+* Avoid a C ``#pragma`` that was only added in GCC 4.6 when compiling with older gcc versions.
+ Patch by Michael Anselmi. (Github issue #2838)
+
+* Auto-encoding of Unicode strings to UTF-8 C/C++ strings failed in Python 3,
+ even though the default encoding there is UTF-8.
+ (Github issue #2819)
+
+
+0.29.5 (2019-02-09)
+===================
+
+Bugs fixed
+----------
+
+* Crash when defining a Python subclass of an extension type and repeatedly calling
+ a cpdef method on it. (Github issue #2823)
+
+* Compiler crash when ``prange()`` loops appear inside of with-statements.
+ (Github issue #2780)
+
+* Some C compiler warnings were resolved.
+ Patches by Christoph Gohlke. (Github issues #2815, #2816, #2817, #2822)
+
+* Python conversion of C++ enums failed in 0.29.
+ Patch by Orivej Desh. (Github issue #2767)
+
+
+0.29.4 (2019-02-01)
+===================
+
+Bugs fixed
+----------
+
+* Division of numeric constants by a runtime value of 0 could fail to raise a
+ ``ZeroDivisionError``. (Github issue #2820)
+
+
+0.29.3 (2019-01-19)
+===================
+
+Bugs fixed
+----------
+
+* Some C code for memoryviews was generated in a non-deterministic order.
+ Patch by Martijn van Steenbergen. (Github issue #2779)
+
+* C89 compatibility was accidentally lost since 0.28.
+ Patches by gastineau and true-pasky. (Github issues #2778, #2801)
+
+* A C compiler cast warning was resolved.
+ Patch by Michael Buesch. (Github issue #2774)
+
+* A compilation failure with complex numbers under MSVC++ was resolved.
+ (Github issue #2797)
+
+* Coverage reporting could fail when modules were moved around after the build.
+ Patch by Wenjun Si. (Github issue #2776)
+
+
+0.29.2 (2018-12-14)
+===================
+
+Bugs fixed
+----------
+
+* The code generated for deduplicated constants leaked some references.
+ (Github issue #2750)
+
+* The declaration of ``sigismember()`` in ``libc.signal`` was corrected.
+ (Github issue #2756)
+
+* Crashes in compiler and test runner were fixed.
+ (Github issues #2736, #2755)
+
+* A C compiler warning about an invalid safety check was resolved.
+ (Github issue #2731)
+
+
+0.29.1 (2018-11-24)
+===================
+
+Bugs fixed
+----------
+
+* Extensions compiled with MinGW-64 under Windows could misinterpret integer
+ objects larger than 15 bits and return incorrect results.
+ (Github issue #2670)
+
+* Cython no longer requires the source to be writable when copying its data
+ into a memory view slice.
+ Patch by Andrey Paramonov. (Github issue #2644)
+
+* Line tracing of ``try``-statements generated invalid C code.
+ (Github issue #2274)
+
+* When using the ``warn.undeclared`` directive, Cython's own code generated
+ warnings that are now fixed.
+ Patch by Nicolas Pauss. (Github issue #2685)
+
+* Cython's memoryviews no longer require strides for setting the shape field
+ but only the ``PyBUF_ND`` flag to be set.
+ Patch by John Kirkham. (Github issue #2716)
+
+* Some C compiler warnings about unused memoryview code were fixed.
+ Patch by Ho Cheuk Ting. (Github issue #2588)
+
+* A C compiler warning about implicit signed/unsigned conversion was fixed.
+ (Github issue #2729)
+
+* Assignments to C++ references returned by ``operator[]`` could fail to compile.
+ (Github issue #2671)
+
+* The power operator and the support for NumPy math functions were fixed
+ in Pythran expressions.
+ Patch by Serge Guelton. (Github issues #2702, #2709)
+
+* Signatures with memory view arguments now show the expected type
+ when embedded in docstrings.
+ Patch by Matthew Chan and Benjamin Weigel. (Github issue #2634)
+
+* Some ``from ... cimport ...`` constructs were not correctly considered
+ when searching modified dependencies in ``cythonize()`` to decide
+ whether to recompile a module.
+ Patch by Kryštof Pilnáček. (Github issue #2638)
+
+* A struct field type in the ``cpython.array`` declarations was corrected.
+ Patch by John Kirkham. (Github issue #2712)
+
+
+0.29 (2018-10-14)
+=================
+
+Features added
+--------------
+
+* PEP-489 multi-phase module initialisation has been enabled again. Module
+ reloads in other subinterpreters raise an exception to prevent corruption
+ of the static module state.
+
+* A set of ``mypy`` compatible PEP-484 declarations was added for Cython's C data
+ types to integrate with static analysers in typed Python code. They are available
+ in the ``Cython/Shadow.pyi`` module and describe the types in the special ``cython``
+ module that can be used for typing in Python code.
+ Original patch by Julian Gethmann. (Github issue #1965)
+
+* Memoryviews are supported in PEP-484/526 style type declarations.
+ (Github issue #2529)
+
+* ``@cython.nogil`` is supported as a C-function decorator in Python code.
+ (Github issue #2557)
+
+* Raising exceptions from nogil code will automatically acquire the GIL, instead
+ of requiring an explicit ``with gil`` block.
+
+* C++ functions can now be declared as potentially raising both C++ and Python
+ exceptions, so that Cython can handle both correctly.
+ (Github issue #2615)
+
+* ``cython.inline()`` supports a direct ``language_level`` keyword argument that
+ was previously only available via a directive.
+
+* A new language level name ``3str`` was added that mostly corresponds to language
+ level 3, but keeps unprefixed string literals as type 'str' in both Py2 and Py3,
+ and the builtin 'str' type unchanged. This will become the default in the next
+ Cython release and is meant to help user code a) transition more easily to this
+ new default and b) migrate to Python 3 source code semantics without making support
+ for Python 2.x difficult.
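A minimal sketch of what ``3str`` means in practice. The module below uses the per-file directive comment and is plain Python, so it also runs unchanged under CPython 3 (where unprefixed literals are ``str`` anyway); the directive matters when the file is compiled by Cython on Python 2.

```python
# cython: language_level=3str
# Under language level '3str', Python 3 syntax and semantics apply, but an
# unprefixed literal like the one below keeps the type 'str' on both Py2
# and Py3, instead of being forced to 'unicode' as plain level 3 would do.
greeting = "hello"

# Prefixed literals always keep their explicitly requested type.
explicit_bytes = b"raw"
explicit_text = u"text"

print(type(greeting).__name__)
```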
+
+* In CPython 3.6 and later, looking up globals in the module dict is almost
+ as fast as looking up C globals.
+ (Github issue #2313)
+
+* For a Python subclass of an extension type, repeated method calls to non-overridden
+ cpdef methods can avoid the attribute lookup in Py3.6+, which makes them 4x faster.
+ (Github issue #2313)
+
+* (In-)equality comparisons of objects to integer literals are faster.
+ (Github issue #2188)
+
+* Some internal and 1-argument method calls are faster.
+
+* Modules that cimport many external extension types from other Cython modules
+ execute fewer import requests during module initialisation.
+
+* Constant tuples and slices are deduplicated and only created once per module.
+ (Github issue #2292)
+
+* The coverage plugin considers more C file extensions such as ``.cc`` and ``.cxx``.
+ (Github issue #2266)
+
+* The ``cythonize`` command accepts compile time variable values (as set by ``DEF``)
+ through the new ``-E`` option.
+ Patch by Jerome Kieffer. (Github issue #2315)
+
+* ``pyximport`` can import from namespace packages.
+ Patch by Prakhar Goel. (Github issue #2294)
+
+* Some missing numpy and CPython C-API declarations were added.
+ Patch by John Kirkham. (Github issues #2523, #2520, #2537)
+
+* Declarations for the ``pylifecycle`` C-API functions were added in a new .pxd file
+ ``cpython.pylifecycle``.
+
+* The Pythran support was updated to work with the latest Pythran 0.8.7.
+ Original patch by Adrien Guinet. (Github issue #2600)
+
+* ``%a`` is included in the string formatting types that are optimised into f-strings.
+ In this case, it is also automatically mapped to ``%r`` in Python 2.x.
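For illustration (not part of the changelog entry): ``%a`` is the ``ascii()``-based conversion, so in Python 3 it corresponds to the ``!a`` conversion in f-strings, which is presumably the form the optimisation produces.

```python
# '%a' applies ascii() to the value, escaping non-ASCII characters.
value = "näive"
formatted = "%a" % (value,)

# The f-string spelling of the same conversion gives the same result.
assert formatted == f"{value!a}"
print(formatted)
```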
+
+* New C macro ``CYTHON_HEX_VERSION`` to access Cython's version in the same style as
+ ``PY_VERSION_HEX``.
+
+* Constants in ``libc.math`` are now declared as ``const`` to simplify their handling.
+
+* An additional ``check_size`` clause was added to the ``ctypedef class`` name
+ specification to allow suppressing warnings when importing modules with
+ backwards-compatible ``PyTypeObject`` size changes.
+ Patch by Matti Picus. (Github issue #2627)
+
+Bugs fixed
+----------
+
+* The exception handling in generators and coroutines under CPython 3.7 was adapted
+ to the newly introduced exception stack. Users of Cython 0.28 who want to support
+ Python 3.7 are encouraged to upgrade to 0.29 to avoid potentially incorrect error
+ reporting and tracebacks. (Github issue #1958)
+
+* Crash when importing a module under Stackless Python that was built for CPython.
+ Patch by Anselm Kruis. (Github issue #2534)
+
+* 2-value slicing of typed sequences failed if the start or stop index was None.
+ Patch by Christian Gibson. (Github issue #2508)
+
+* Multiplied string literals lost their factor when they were part of another
+ constant expression (e.g. 'x' * 10 + 'y' => 'xy').
+
+* String formatting with the '%' operator didn't call the special ``__rmod__()``
+ method if the right side is a string subclass that implements it.
+ (Python issue 28598)
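The CPython semantics being matched here can be shown in plain Python (the ``Template`` subclass is a hypothetical name for illustration): when the right operand of ``%`` is a ``str`` subclass that defines ``__rmod__()``, that method takes precedence over normal %-formatting.

```python
class Template(str):
    def __rmod__(self, left):
        # Called instead of normal %-formatting, because Template is a
        # subclass of str and overrides the reflected operator.
        return "rmod called with %r" % (left,)

result = "value: %s" % Template("x")
print(result)
```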
+
+* The directive ``language_level=3`` did not apply to the first token in the
+ source file. (Github issue #2230)
+
+* Overriding cpdef methods did not work in Python subclasses with slots.
+ Note that this can have a performance impact on calls from Cython code.
+ (Github issue #1771)
+
+* Fix declarations of builtin or C types using strings in pure python mode.
+ (Github issue #2046)
+
+* Generator expressions and lambdas failed to compile in ``@cfunc`` functions.
+ (Github issue #459)
+
+* Global names with ``const`` types were not excluded from star-import assignments
+ which could lead to invalid C code.
+ (Github issue #2621)
+
+* Several internal function signatures were fixed that led to warnings in gcc-8.
+ (Github issue #2363)
+
+* The numpy helper functions ``set_array_base()`` and ``get_array_base()``
+ were adapted to the current numpy C-API recommendations.
+ Patch by Matti Picus. (Github issue #2528)
+
+* Some NumPy related code was updated to avoid deprecated API usage.
+ Original patch by jbrockmendel. (Github issue #2559)
+
+* Several C++ STL declarations were extended and corrected.
+ Patch by Valentin Valls. (Github issue #2207)
+
+* C lines of the module init function were never reported in
+ exception stack traces.
+ Patch by Jeroen Demeyer. (Github issue #2492)
+
+* When PEP-489 support is enabled, reloading the module overwrote any static
+ module state. It now raises an exception instead, given that reloading is
+ not actually supported.
+
+* Object-returning, C++ exception throwing functions were not checking that
+ the return value was non-null.
+ Original patch by Matt Wozniski (Github Issue #2603)
+
+* The source file encoding detection could get confused if the
+ ``c_string_encoding`` directive appeared within the first two lines.
+ (Github issue #2632)
+
+* Cython generated modules no longer emit a warning during import when the
+ size of the NumPy array type is larger than what was found at compile time.
+ Instead, this is assumed to be a backwards compatible change on NumPy side.
+
+Other changes
+-------------
+
+* Cython now emits a warning when no ``language_level`` (2, 3 or '3str') is set
+ explicitly, neither as a ``cythonize()`` option nor as a compiler directive.
+ This is meant to prepare the transition of the default language level from
+ currently Py2 to Py3, since that is what most new users will expect these days.
+ The future default will, however, not enforce unicode literals, because this
+ has proven a major obstacle in the support for both Python 2.x and 3.x. The
+ next major release is intended to make this change, so that it will parse all
+ code that does not request a specific language level as Python 3 code, but with
+ ``str`` literals. The language level 2 will continue to be supported for an
+ indefinite time.
+
+* The documentation was restructured, cleaned up and examples are now tested.
+ The NumPy tutorial was also rewritten to simplify the running example.
+ Contributed by Gabriel de Marmiesse. (Github issue #2245)
+
+* Cython compiles less of its own modules at build time to reduce the installed
+ package size to about half of its previous size. This makes the compiler
+ slightly slower, by about 5-7%.
+
+
+0.28.6 (2018-11-01)
+===================
+
+Bugs fixed
+----------
+
+* Extensions compiled with MinGW-64 under Windows could misinterpret integer
+ objects larger than 15 bits and return incorrect results.
+ (Github issue #2670)
+
+* Multiplied string literals lost their factor when they were part of another
+ constant expression (e.g. 'x' * 10 + 'y' => 'xy').
+
+
+0.28.5 (2018-08-03)
+===================
+
+Bugs fixed
+----------
+
+* The discouraged usage of GCC's attribute ``optimize("Os")`` was replaced by the
+ similar attribute ``cold`` to reduce the code impact of the module init functions.
+ (Github issue #2494)
+
+* A reference leak in Py2.x was fixed when comparing str to unicode for equality.
+
+
+0.28.4 (2018-07-08)
+===================
+
+Bugs fixed
+----------
+
+* Reallowing ``tp_clear()`` in a subtype of an ``@no_gc_clear`` extension type
+ generated an invalid C function call to the (non-existent) base type implementation.
+ (Github issue #2309)
+
+* Exception catching based on a non-literal (runtime) tuple could fail to match the
+ exception. (Github issue #2425)
+
+* Compile fix for CPython 3.7.0a2. (Github issue #2477)
+
+
+0.28.3 (2018-05-27)
+===================
+
+Bugs fixed
+----------
+
+* Set iteration was broken on non-CPython implementations since 0.28.
+
+* ``UnicodeEncodeError`` in Py2 when ``%s`` formatting is optimised for
+ unicode strings. (Github issue #2276)
+
+* Work around a crash bug in g++ 4.4.x by disabling the size reduction setting
+ of the module init function in this version. (Github issue #2235)
+
+* Crash when exceptions occur early during module initialisation.
+ (Github issue #2199)
+
+
+0.28.2 (2018-04-13)
+===================
+
+Features added
+--------------
+
+* ``abs()`` is faster for Python long objects.
+
+* The C++11 methods ``front()`` and ``end()`` were added to the declaration of
+ ``libcpp.string``. Patch by Alex Huszagh. (Github issue #2123)
+
+* The C++11 methods ``reserve()`` and ``bucket_count()`` are declared for
+ ``libcpp.unordered_map``. Patch by Valentin Valls. (Github issue #2168)
+
+Bugs fixed
+----------
+
+* The copy of a read-only memoryview was considered read-only as well, whereas
+ a common reason to copy a read-only view is to make it writable. The result
+ of the copying is now a writable buffer by default.
+ (Github issue #2134)
+
+* The ``switch`` statement generation failed to apply recursively to the body of
+ converted if-statements.
+
+* ``NULL`` was sometimes rejected as exception return value when the returned
+ type is a fused pointer type.
+ Patch by Callie LeFave. (Github issue #2177)
+
+* Fixed compatibility with PyPy 5.11.
+ Patch by Matti Picus. (Github issue #2165)
+
+Other changes
+-------------
+
+* The NumPy tutorial was rewritten to use memoryviews instead of the older
+ buffer declaration syntax.
+ Contributed by Gabriel de Marmiesse. (Github issue #2162)
+
+
+0.28.1 (2018-03-18)
+===================
+
+Bugs fixed
+----------
+
+* ``PyFrozenSet_New()`` was accidentally used in PyPy where it is missing
+ from the C-API.
+
+* Assignment between some C++ templated types was incorrectly rejected
+ when the templates mix ``const`` with ``ctypedef``.
+ (Github issue #2148)
+
+* Undeclared C++ no-args constructors in subclasses could make the compilation
+ fail if the base class constructor was declared without ``nogil``.
+ (Github issue #2157)
+
+* Bytes %-formatting inferred ``basestring`` (bytes or unicode) as result type
+ in some cases where ``bytes`` would have been safe to infer.
+ (Github issue #2153)
+
+* ``None`` was accidentally disallowed as typed return value of ``dict.pop()``.
+ (Github issue #2152)
+
+
+0.28 (2018-03-13)
+=================
+
+Features added
+--------------
+
+* Cdef classes can now multiply inherit from ordinary Python classes.
+ (The primary base must still be a cdef class, possibly ``object``, and
+ the other bases must *not* be cdef classes.)
+
+* Type inference is now supported for Pythran compiled NumPy expressions.
+ Patch by Nils Braun. (Github issue #1954)
+
+* The ``const`` modifier can be applied to memoryview declarations to allow
+ read-only buffers as input. (Github issues #1605, #1869)
+
+* C code in the docstring of a ``cdef extern`` block is copied verbatim
+ into the generated file.
+ Patch by Jeroen Demeyer. (Github issue #1915)
+
+* When compiling with gcc, the module init function is now tuned for small
+ code size instead of whatever compile flags were provided externally.
+ Cython now also disables some code intensive optimisations in that function
+ to further reduce the code size. (Github issue #2102)
+
+* Decorating an async coroutine with ``@cython.iterable_coroutine`` changes its
+ type at compile time to make it iterable. While this is not strictly in line
+ with PEP-492, it improves the interoperability with old-style coroutines that
+ use ``yield from`` instead of ``await``.
+
+* The IPython magic has preliminary support for JupyterLab.
+ (Github issue #1775)
+
+* The new TSS C-API in CPython 3.7 is supported and has been backported.
+ Patch by Naotoshi Seo. (Github issue #1932)
+
+* Cython knows the new ``Py_tss_t`` type defined in PEP-539 and automatically
+ initialises variables declared with that type to ``Py_tss_NEEDS_INIT``,
+ a value which cannot be used outside of static assignments.
+
+* The set methods ``.remove()`` and ``.discard()`` are optimised.
+ Patch by Antoine Pitrou. (Github issue #2042)
+
+* ``dict.pop()`` is optimised.
+ Original patch by Antoine Pitrou. (Github issue #2047)
+
+* Iteration over sets and frozensets is optimised.
+ (Github issue #2048)
+
+* Safe integer loops (< range(2^30)) are automatically optimised into C loops.
+
+* ``alist.extend([a,b,c])`` is optimised into sequential ``list.append()`` calls
+ for short literal sequences.
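In plain Python terms the rewrite is behaviour-preserving; a sketch of the equivalence (variable names are illustrative, and the generated C naturally skips the temporary list rather than looping as shown):

```python
# Source form: extend with a short literal sequence.
alist = [1]
alist.extend([2, 3, 4])

# Conceptually what the optimised code does: sequential appends,
# avoiding the construction of the intermediate list object.
blist = [1]
for item in (2, 3, 4):
    blist.append(item)

assert alist == blist
print(alist)
```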
+
+* Calls to builtin methods that are not specifically optimised into C-API calls
+ now use a cache that avoids repeated lookups of the underlying C function.
+ (Github issue #2054)
+
+* Single argument function calls can avoid the argument tuple creation in some cases.
+
+* Some redundant extension type checks are avoided.
+
+* Formatting C enum values in f-strings is faster, as well as some other special cases.
+
+* String formatting with the '%' operator is optimised into f-strings in simple cases.
+
+* Subscripting (item access) is faster in some cases.
+
+* Some ``bytearray`` operations have been optimised similar to ``bytes``.
+
+* Some PEP-484/526 container type declarations are now considered for
+ loop optimisations.
+
+* Indexing into memoryview slices with ``view[i][j]`` is now optimised into
+ ``view[i, j]``.
+
+* Python compatible ``cython.*`` types can now be mixed with type declarations
+ in Cython syntax.
+
+* Name lookups in the module and in classes are faster.
+
+* Python attribute lookups on extension types without instance dict are faster.
+
+* Some missing signals were added to ``libc/signal.pxd``.
+ Patch by Jeroen Demeyer. (Github issue #1914)
+
+* The warning about repeated extern declarations is now visible by default.
+ (Github issue #1874)
+
+* The exception handling of the function types used by CPython's type slot
+ functions was corrected to match the de-facto standard behaviour, so that
+ code that uses them directly benefits from automatic and correct exception
+ propagation. Patch by Jeroen Demeyer. (Github issue #1980)
+
+* Defining the macro ``CYTHON_NO_PYINIT_EXPORT`` will prevent the module init
+ function from being exported as symbol, e.g. when linking modules statically
+ in an embedding setup. Patch by AraHaan. (Github issue #1944)
+
+Bugs fixed
+----------
+
+* If a module name is explicitly provided for an ``Extension()`` that is compiled
+ via ``cythonize()``, it was previously ignored and replaced by the source file
+ name. It can now be used to override the target module name, e.g. for compiling
+ prefixed accelerator modules from Python files. (Github issue #2038)
+
+* The arguments of the ``num_threads`` parameter of parallel sections
+ were not sufficiently validated and could lead to invalid C code.
+ (Github issue #1957)
+
+* Catching exceptions with a non-trivial exception pattern could call into
+ CPython with a live exception set. This triggered incorrect behaviour
+ and crashes, especially in CPython 3.7.
+
+* The signature of the special ``__richcmp__()`` method was corrected to recognise
+ the type of the first argument as ``self``. It was previously treated as plain
+ object, but CPython actually guarantees that it always has the correct type.
+ Note: this can change the semantics of user code that previously relied on
+ ``self`` being untyped.
+
+* Some Python 3 exceptions were not recognised as builtins when running Cython
+ under Python 2.
+
+* Some async helper functions were not defined in the generated C code when
+ compiling simple async code. (Github issue #2075)
+
+* Line tracing did not include generators and coroutines.
+ (Github issue #1949)
+
+* C++ declarations for ``unordered_map`` were corrected.
+ Patch by Michael Schatzow. (Github issue #1484)
+
+* Iterator declarations in C++ ``deque`` and ``vector`` were corrected.
+ Patch by Alex Huszagh. (Github issue #1870)
+
+* The const modifiers in the C++ ``string`` declarations were corrected, together
+ with the coercion behaviour of string literals into C++ strings.
+ (Github issue #2132)
+
+* Some declaration types in ``libc.limits`` were corrected.
+ Patch by Jeroen Demeyer. (Github issue #2016)
+
+* ``@cython.final`` was not accepted on Python classes with an ``@cython.cclass``
+ decorator. (Github issue #2040)
+
+* Cython no longer creates useless and incorrect ``PyInstanceMethod`` wrappers for
+ methods in Python 3. Patch by Jeroen Demeyer. (Github issue #2105)
+
+* The builtin ``bytearray`` type could not be used as base type of cdef classes.
+ (Github issue #2106)
+
+Other changes
+-------------
+
+
+0.27.3 (2017-11-03)
+===================
+
+Bugs fixed
+----------
+
+* String forward references to extension types like ``@cython.locals(x="ExtType")``
+ failed to find the named type. (Github issue #1962)
+
+* NumPy slicing generated incorrect results when compiled with Pythran.
+ Original patch by Serge Guelton (Github issue #1946).
+
+* Fix "undefined reference" linker error for generators on Windows in Py3.3-3.5.
+ (Github issue #1968)
+
+* Adapt to recent C-API change of ``PyThreadState`` in CPython 3.7.
+
+* Fix signature of ``PyWeakref_GetObject()`` API declaration.
+ Patch by Jeroen Demeyer (Github issue #1975).
+
+
+0.27.2 (2017-10-22)
+===================
+
+Bugs fixed
+----------
+
+* Comprehensions could incorrectly be optimised away when they appeared in boolean
+ test contexts. (Github issue #1920)
+
+* The special methods ``__eq__``, ``__lt__`` etc. in extension types did not type
+ their first argument as the type of the class but ``object``. (Github issue #1935)
+
+* Crash on first lookup of "cline_in_traceback" option during exception handling.
+ (Github issue #1907)
+
+* Some nested module level comprehensions failed to compile.
+ (Github issue #1906)
+
+* Compiler crash on some complex type declarations in pure mode.
+ (Github issue #1908)
+
+* ``std::unordered_map.erase()`` was declared with an incorrect ``void`` return
+ type in ``libcpp.unordered_map``. (Github issue #1484)
+
+* Invalid use of C++ ``fallthrough`` attribute before C++11 and similar issue in clang.
+ (Github issue #1930)
+
+* Compiler crash on misnamed properties. (Github issue #1905)
+
+
+0.27.1 (2017-10-01)
+===================
+
+Features added
+--------------
+
+* The Jupyter magic has a new debug option ``--verbose`` that shows details about
+ the distutils invocation. Patch by Boris Filippov (Github issue #1881).
+
+Bugs fixed
+----------
+
+* Py3 list comprehensions in class bodies resulted in invalid C code.
+ (Github issue #1889)
+
+* Modules built for later CPython 3.5.x versions failed to import in 3.5.0/3.5.1.
+ (Github issue #1880)
+
+* Deallocating fused-type functions and methods kept their GC tracking enabled,
+ which could potentially lead to recursive deallocation attempts.
+
+* Crash when compiling in C++ mode with old setuptools versions.
+ (Github issue #1879)
+
+* C++ object arguments for the constructor of Cython-implemented C++ classes are now
+ passed by reference and not by value to allow for non-copyable arguments, such
+ as ``unique_ptr``.
+
+* API-exported C++ classes with Python object members failed to compile.
+ (Github issue #1866)
+
+* Some issues with the new relaxed exception value handling were resolved.
+
+* Python classes as annotation types could prevent compilation.
+ (Github issue #1887)
+
+* Cython annotation types in Python files could lead to import failures
+ with a "cython undefined" error. Recognised types are now turned into strings.
+
+* Coverage analysis could fail to report on extension modules on some platforms.
+
+* Annotations could be parsed (and rejected) as types even with
+ ``annotation_typing=False``.
+
+Other changes
+-------------
+
+* PEP 489 support has been disabled by default to counter incompatibilities with
+ import setups that try to reload or reinitialise modules.
+
+
+0.27 (2017-09-23)
+=================
+
+Features added
+--------------
+
+* Extension module initialisation follows
+ `PEP 489 <https://www.python.org/dev/peps/pep-0489/>`_ in CPython 3.5+, which
+ resolves several differences with regard to normal Python modules. This makes
+ the global names ``__file__`` and ``__path__`` correctly available to module
+ level code and improves the support for module-level relative imports.
+ (Github issues #1715, #1753, #1035)
+
+* Asynchronous generators (`PEP 525 <https://www.python.org/dev/peps/pep-0525/>`_)
+ and asynchronous comprehensions (`PEP 530 <https://www.python.org/dev/peps/pep-0530/>`_)
+ have been implemented. Note that async generators require finalisation support
+ in order to allow for asynchronous operations during cleanup, which is only
+ available in CPython 3.6+. All other functionality has been backported as usual.
+
+* Variable annotations are now parsed according to
+ `PEP 526 <https://www.python.org/dev/peps/pep-0526/>`_. Cython types (e.g.
+ ``cython.int``) are evaluated as C type declarations and everything else as Python
+ types. This can be disabled with the directive ``annotation_typing=False``.
+ Note that most complex PEP-484 style annotations are currently ignored. This will
+ change in future releases. (Github issue #1850)
+
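Editor's note: the PEP 526 entry above is easy to see in plain Python. The sketch below uses made-up names; uncompiled, the annotations are ordinary Python annotations, while Cython (with ``annotation_typing`` enabled) would read ``float``/``int`` as C type declarations.

```python
def scale(values):
    total: float = 0.0   # a C double when compiled by Cython
    n: int = len(values)  # a C integer when compiled by Cython
    for v in values:
        total += v
    return total / n if n else 0.0

print(scale([1, 2, 3]))  # 2.0
```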
+* Extension types (also in pure Python mode) can implement the normal special methods
+ ``__eq__``, ``__lt__`` etc. for comparisons instead of the low-level ``__richcmp__``
+ method. (Github issue #690)
+
+* New decorator ``@cython.exceptval(x=None, check=False)`` that makes the signature
+ declarations ``except x``, ``except? x`` and ``except *`` available to pure Python
+ code. Original patch by Antonio Cuni. (Github issue #1653)
+
+* Signature annotations are now included in the signature docstring generated by
+ the ``embedsignature`` directive. Patch by Lisandro Dalcin (Github issue #1781).
+
+* The gdb support for Python code (``libpython.py``) was updated to the latest
+ version in CPython 3.7 (git rev 5fe59f8).
+
+* The compiler tries to find a usable exception return value for cdef functions
+ with ``except *`` if the returned type allows it. Note that this feature is subject
+ to safety limitations, so it is still better to provide an explicit declaration.
+
+* C functions can be assigned to function pointers with a compatible exception
+ declaration, not only with exact matches. As a side effect, certain compatible
+ signature overrides are now allowed, and some mismatches of exception signatures
+ that previously went undetected are now rejected as errors.
+
+* The IPython/Jupyter magic integration has a new option ``%%cython --pgo`` for profile
+ guided optimisation. It compiles the cell with PGO settings for the C compiler,
+ executes it to generate a runtime profile, and then compiles it again using that
+ profile for C compiler optimisation. Currently only tested with gcc.
+
+* ``len(memoryview)`` can be used in nogil sections to get the size of the
+ first dimension of a memory view (``shape[0]``). (Github issue #1733)
+
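Editor's note: for the ``len(memoryview)`` entry above, the semantics are the same as in plain CPython, where ``len()`` of a memoryview already equals its first dimension; the entry makes that call usable inside ``nogil`` sections.

```python
import array

# len() of a 1-D memoryview equals shape[0].
mv = memoryview(array.array('i', [1, 2, 3, 4]))
print(len(mv), mv.shape[0])  # 4 4
```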
+* C++ classes can now contain (properly refcounted) Python objects.
+
+* NumPy dtype subarrays are now accessible through the C-API.
+ Patch by Gerald Dalley (Github issue #245).
+
+* Resolves several issues with PyPy and uses faster async slots in PyPy3.
+ Patch by Ronan Lamy (Github issues #1871, #1878).
+
+Bugs fixed
+----------
+
+* Extension types that were cimported from other Cython modules could disagree
+ about the order of fused cdef methods in their call table. This could lead
+ to wrong methods being called and potentially also crashes. The fix required
+ changes to the ordering of fused methods in the call table, which may break
+ existing compiled modules that call fused cdef methods across module boundaries,
+ if these methods were implemented in a different order than they were declared
+ in the corresponding .pxd file. (Github issue #1873)
+
+* The exception state handling in generators and coroutines could lead to
+ exceptions in the caller being lost if an exception was raised and handled
+ inside of the coroutine when yielding. (Github issue #1731)
+
+* Loops over ``range(enum)`` were not converted into C for-loops. Note that it
+ is still recommended to use an explicit cast to a C integer type in this case.
+
+* Error positions of names (e.g. variables) were incorrectly reported after the
+ name and not at the beginning of the name.
+
+* Compile time ``DEF`` assignments were evaluated even when they occur inside of
+ falsy ``IF`` blocks. (Github issue #1796)
+
+* Disabling the line tracing from a trace function could fail.
+ Original patch by Dmitry Trofimov. (Github issue #1769)
+
+* Several issues with the Pythran integration were resolved.
+
+* abs(signed int) now returns a signed rather than unsigned int.
+ (Github issue #1837)
+
+* Reading ``frame.f_locals`` of a Cython function (e.g. from a debugger or
+ profiler) could modify the module globals. (Github issue #1836)
+
+* Buffer type mismatches in the NumPy buffer support could leak a reference to the
+ buffer owner.
+
+* Using the "is_f_contig" and "is_c_contig" memoryview methods together could leave
+ one of them undeclared. (Github issue #1872)
+
+* Compilation failed if the for-in-range loop target was not a variable but a more
+ complex expression, e.g. an item assignment. (Github issue #1831)
+
+* Compile time evaluations of (partially) constant f-strings could show incorrect
+ results.
+
+* Escape sequences in raw f-strings (``fr'...'``) were resolved instead of passing
+ them through as expected.
+
+* Some ref-counting issues in buffer error handling have been resolved.
+
+Other changes
+-------------
+
+* Type declarations in signature annotations are now parsed according to
+ `PEP 484 <https://www.python.org/dev/peps/pep-0484/>`_
+ typing. Only Cython types (e.g. ``cython.int``) and Python builtin types are
+ currently considered as type declarations. Everything else is ignored, but this
+ will change in a future Cython release.
+ (Github issue #1672)
+
+* The directive ``annotation_typing`` is now ``True`` by default, which enables
+ parsing type declarations from annotations.
+
+* This release no longer supports Python 3.2.
+
0.26.1 (2017-08-29)
===================
@@ -46,6 +1061,8 @@
* Some include directories and dependencies were referenced with their absolute paths
in the generated files despite lying within the project directory.
+* Failure to compile in Py3.7 due to a modified signature of ``_PyCFunctionFast()``
+
0.26 (2017-07-19)
=================
@@ -183,7 +1200,8 @@
* The new METH_FASTCALL calling convention for PyCFunctions is supported
in CPython 3.6. See https://bugs.python.org/issue27810
-* Initial support for using Cython modules in Pyston. Patch by Daetalus.
+* Initial support for using Cython modules in Pyston.
+ Patch by Boxiang Sun.
* Dynamic Python attributes are allowed on cdef classes if an attribute
``cdef dict __dict__`` is declared in the class. Patch by empyrical.
@@ -249,7 +1267,7 @@
* IPython cell magic was lacking a good way to enable Python 3 code semantics.
It can now be used as "%%cython -3".
-* Follow a recent change in `PEP 492 <http://www.python.org/dev/peps/pep-0492/>`_
+* Follow a recent change in `PEP 492 <https://www.python.org/dev/peps/pep-0492/>`_
and CPython 3.5.2 that now requires the ``__aiter__()`` method of asynchronous
iterators to be a simple ``def`` method instead of an ``async def`` method.
@@ -279,12 +1297,12 @@
Features added
--------------
-* PEP 498: Literal String Formatting (f-strings).
+* `PEP 498 <https://www.python.org/dev/peps/pep-0498/>`_:
+ Literal String Formatting (f-strings).
Original patch by Jelle Zijlstra.
- https://www.python.org/dev/peps/pep-0498/
-* PEP 515: Underscores as visual separators in number literals.
- https://www.python.org/dev/peps/pep-0515/
+* `PEP 515 <https://www.python.org/dev/peps/pep-0515/>`_:
+ Underscores as visual separators in number literals.
* Parser was adapted to some minor syntax changes in Py3.6, e.g.
https://bugs.python.org/issue9232
@@ -467,11 +1485,11 @@
Features added
--------------
-* PEP 492 (async/await) was implemented.
- See https://www.python.org/dev/peps/pep-0492/
+* `PEP 492 <https://www.python.org/dev/peps/pep-0492/>`_
+ (async/await) was implemented.
-* PEP 448 (Additional Unpacking Generalizations) was implemented.
- See https://www.python.org/dev/peps/pep-0448/
+* `PEP 448 <https://www.python.org/dev/peps/pep-0448/>`_
+ (Additional Unpacking Generalizations) was implemented.
* Support for coverage.py 4.0+ can be enabled by adding the plugin
"Cython.Coverage" to the ".coveragerc" config file.
@@ -662,9 +1680,9 @@
* Anonymous C tuple types can be declared as (ctype1, ctype2, ...).
-* PEP 479: turn accidental StopIteration exceptions that exit generators
+* `PEP 479 <https://www.python.org/dev/peps/pep-0479/>`_:
+ turn accidental StopIteration exceptions that exit generators
into a RuntimeError, activated with future import "generator_stop".
- See https://www.python.org/dev/peps/pep-0479/
* Looping over ``reversed(range())`` is optimised in the same way as
``range()``. Patch by Favian Contreras.
@@ -1639,9 +2657,9 @@
* GDB support. http://docs.cython.org/src/userguide/debugging.html
-* A new build system with support for inline distutils directives, correct dependency tracking, and parallel compilation. http://wiki.cython.org/enhancements/distutils_preprocessing
+* A new build system with support for inline distutils directives, correct dependency tracking, and parallel compilation. https://github.com/cython/cython/wiki/enhancements-distutils_preprocessing
-* Support for dynamic compilation at runtime via the new cython.inline function and cython.compile decorator. http://wiki.cython.org/enhancements/inline
+* Support for dynamic compilation at runtime via the new cython.inline function and cython.compile decorator. https://github.com/cython/cython/wiki/enhancements-inline
* "nogil" blocks are supported when compiling pure Python code by writing "with cython.nogil".
diff -Nru cython-0.26.1/Cython/Build/Cythonize.py cython-0.29.14/Cython/Build/Cythonize.py
--- cython-0.26.1/Cython/Build/Cythonize.py 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Build/Cythonize.py 2018-12-14 14:27:50.000000000 +0000
@@ -21,13 +21,21 @@
class _FakePool(object):
def map_async(self, func, args):
- from itertools import imap
+ try:
+ from itertools import imap
+ except ImportError:
+ imap=map
for _ in imap(func, args):
pass
- def close(self): pass
- def terminate(self): pass
- def join(self): pass
+ def close(self):
+ pass
+
+ def terminate(self):
+ pass
+
+ def join(self):
+ pass
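Editor's note: the ``imap`` fallback added to ``_FakePool.map_async`` above is the standard Py2/Py3 bridge, shown here standalone. On Python 2, ``itertools.imap`` is the lazy map; on Python 3, the builtin ``map`` is already lazy and can stand in directly.

```python
try:
    from itertools import imap  # Python 2
except ImportError:
    imap = map  # Python 3: builtin map is already lazy

squares = imap(lambda x: x * x, [1, 2, 3])
print(list(squares))  # [1, 4, 9]
```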
def parse_directives(option, name, value, parser):
@@ -52,6 +60,13 @@
setattr(parser.values, dest, options)
+def parse_compile_time_env(option, name, value, parser):
+ dest = option.dest
+ old_env = dict(getattr(parser.values, dest, {}))
+ new_env = Options.parse_compile_time_env(value, current_settings=old_env)
+ setattr(parser.values, dest, new_env)
+
+
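Editor's note: ``parse_compile_time_env`` above delegates the real parsing to ``Options.parse_compile_time_env``, whose exact semantics may differ; as a rough, hypothetical sketch, the new ``-E NAME=VALUE,...`` flag merges settings into a dict, with later flags winning:

```python
def parse_env(value, current_settings=None):
    # Merge comma-separated NAME=VALUE pairs into a settings dict.
    settings = dict(current_settings or {})
    for item in value.split(','):
        name, _, val = item.partition('=')
        settings[name.strip()] = val.strip()
    return settings

env = parse_env("DEBUG=1")
env = parse_env("FAST=0,DEBUG=2", current_settings=env)
print(env)  # {'DEBUG': '2', 'FAST': '0'}
```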
def find_package_base(path):
base_dir, package_path = os.path.split(path)
while os.path.isfile(os.path.join(base_dir, '__init__.py')):
@@ -85,6 +100,7 @@
exclude_failures=options.keep_going,
exclude=options.excludes,
compiler_directives=options.directives,
+ compile_time_env=options.compile_time_env,
force=options.force,
quiet=options.quiet,
**options.options)
@@ -136,13 +152,23 @@
from optparse import OptionParser
parser = OptionParser(usage='%prog [options] [sources and packages]+')
- parser.add_option('-X', '--directive', metavar='NAME=VALUE,...', dest='directives',
- type=str, action='callback', callback=parse_directives, default={},
+ parser.add_option('-X', '--directive', metavar='NAME=VALUE,...',
+ dest='directives', default={}, type="str",
+ action='callback', callback=parse_directives,
help='set a compiler directive')
- parser.add_option('-s', '--option', metavar='NAME=VALUE', dest='options',
- type=str, action='callback', callback=parse_options, default={},
+ parser.add_option('-E', '--compile-time-env', metavar='NAME=VALUE,...',
+ dest='compile_time_env', default={}, type="str",
+ action='callback', callback=parse_compile_time_env,
+ help='set a compile time environment variable')
+ parser.add_option('-s', '--option', metavar='NAME=VALUE',
+ dest='options', default={}, type="str",
+ action='callback', callback=parse_options,
help='set a cythonize option')
- parser.add_option('-3', dest='python3_mode', action='store_true',
+ parser.add_option('-2', dest='language_level', action='store_const', const=2, default=None,
+ help='use Python 2 syntax mode by default')
+ parser.add_option('-3', dest='language_level', action='store_const', const=3,
+ help='use Python 3 syntax mode by default')
+ parser.add_option('--3str', dest='language_level', action='store_const', const='3str',
help='use Python 3 syntax mode by default')
parser.add_option('-a', '--annotate', dest='annotate', action='store_true',
help='generate annotated HTML page for source files')
@@ -176,8 +202,9 @@
options.build = True
if multiprocessing is None:
options.parallel = 0
- if options.python3_mode:
- options.options['language_level'] = 3
+ if options.language_level:
+ assert options.language_level in (2, 3, '3str')
+ options.options['language_level'] = options.language_level
return options, args
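Editor's note: the hunk above replaces the boolean ``python3_mode`` flag with a single ``language_level`` destination fed by three ``store_const`` options. A minimal, self-contained ``optparse`` sketch of that pattern (option names mirror the diff):

```python
from optparse import OptionParser

parser = OptionParser()
parser.add_option('-2', dest='language_level', action='store_const',
                  const=2, default=None)
parser.add_option('-3', dest='language_level', action='store_const',
                  const=3)
parser.add_option('--3str', dest='language_level', action='store_const',
                  const='3str')

options, _ = parser.parse_args(['--3str'])
print(options.language_level)  # 3str
options, _ = parser.parse_args(['-2'])
print(options.language_level)  # 2
```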
diff -Nru cython-0.26.1/Cython/Build/Dependencies.py cython-0.29.14/Cython/Build/Dependencies.py
--- cython-0.26.1/Cython/Build/Dependencies.py 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/Cython/Build/Dependencies.py 2019-06-30 06:50:51.000000000 +0000
@@ -4,8 +4,19 @@
from .. import __version__
import collections
-import re, os, sys, time
+import contextlib
+import hashlib
+import os
+import shutil
+import subprocess
+import re, sys, time
+import warnings
from glob import iglob
+from io import open as io_open
+from os.path import relpath as _relpath
+from distutils.extension import Extension
+from distutils.util import strtobool
+import zipfile
try:
import gzip
@@ -14,47 +25,21 @@
except ImportError:
gzip_open = open
gzip_ext = ''
-import shutil
-import subprocess
-import os
-
-try:
- import hashlib
-except ImportError:
- import md5 as hashlib
-
-try:
- from io import open as io_open
-except ImportError:
- from codecs import open as io_open
try:
- from os.path import relpath as _relpath
+ import zlib
+ zipfile_compression_mode = zipfile.ZIP_DEFLATED
except ImportError:
- # Py<2.6
- def _relpath(path, start=os.path.curdir):
- if not path:
- raise ValueError("no path specified")
- start_list = os.path.abspath(start).split(os.path.sep)
- path_list = os.path.abspath(path).split(os.path.sep)
- i = len(os.path.commonprefix([start_list, path_list]))
- rel_list = [os.path.pardir] * (len(start_list)-i) + path_list[i:]
- if not rel_list:
- return os.path.curdir
- return os.path.join(*rel_list)
+ zipfile_compression_mode = zipfile.ZIP_STORED
try:
import pythran
- PythranAvailable = True
except:
- PythranAvailable = False
-
-from distutils.extension import Extension
-from distutils.util import strtobool
+ pythran = None
from .. import Utils
from ..Utils import (cached_function, cached_method, path_exists,
- safe_makedirs, copy_file_to_dir_if_newer, is_package_dir)
+ safe_makedirs, copy_file_to_dir_if_newer, is_package_dir, replace_suffix)
from ..Compiler.Main import Context, CompilationOptions, default_options
join_path = cached_function(os.path.join)
@@ -126,21 +111,42 @@
@cached_function
def file_hash(filename):
- path = os.path.normpath(filename.encode("UTF-8"))
- prefix = (str(len(path)) + ":").encode("UTF-8")
+ path = os.path.normpath(filename)
+ prefix = ('%d:%s' % (len(path), path)).encode("UTF-8")
m = hashlib.md5(prefix)
- m.update(path)
- f = open(filename, 'rb')
- try:
+ with open(path, 'rb') as f:
data = f.read(65000)
while data:
m.update(data)
data = f.read(65000)
- finally:
- f.close()
return m.hexdigest()
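Editor's note: the rewritten ``file_hash`` above length-prefixes the path ("``<len>:<path>``") before hashing, so distinct (path, content) pairs cannot collide by concatenation, and reads the file in chunks. A self-contained sketch of the same scheme (the cache decorator and error handling are omitted):

```python
import hashlib
import os
import tempfile

def file_hash(filename):
    # Seed the digest with the length-prefixed, normalised path.
    path = os.path.normpath(filename)
    m = hashlib.md5(('%d:%s' % (len(path), path)).encode('UTF-8'))
    # Stream the file contents in 65000-byte chunks.
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(65000), b''):
            m.update(chunk)
    return m.hexdigest()

with tempfile.NamedTemporaryFile(suffix='.pyx', delete=False) as f:
    f.write(b'def f(): pass\n')
print(len(file_hash(f.name)))  # 32
```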
+def update_pythran_extension(ext):
+ if pythran is None:
+ raise RuntimeError("You first need to install Pythran to use the np_pythran directive.")
+ try:
+ pythran_ext = pythran.config.make_extension(python=True)
+ except TypeError: # older pythran version only
+ pythran_ext = pythran.config.make_extension()
+
+ ext.include_dirs.extend(pythran_ext['include_dirs'])
+ ext.extra_compile_args.extend(pythran_ext['extra_compile_args'])
+ ext.extra_link_args.extend(pythran_ext['extra_link_args'])
+ ext.define_macros.extend(pythran_ext['define_macros'])
+ ext.undef_macros.extend(pythran_ext['undef_macros'])
+ ext.library_dirs.extend(pythran_ext['library_dirs'])
+ ext.libraries.extend(pythran_ext['libraries'])
+ ext.language = 'c++'
+
+ # These options are not compatible with the way normal Cython extensions work
+ for bad_option in ["-fwhole-program", "-fvisibility=hidden"]:
+ try:
+ ext.extra_compile_args.remove(bad_option)
+ except ValueError:
+ pass
+
+
def parse_list(s):
"""
>>> parse_list("")
@@ -223,7 +229,7 @@
break
line = line[1:].lstrip()
kind = next((k for k in ("distutils:","cython:") if line.startswith(k)), None)
- if not kind is None:
+ if kind is not None:
key, _, value = [s.strip() for s in line[len(kind):].partition('=')]
type = distutils_settings.get(key, None)
if line.startswith("cython:") and type is None: continue
@@ -391,6 +397,10 @@
r"(?:^\s*cimport +([0-9a-zA-Z_.]+(?: *, *[0-9a-zA-Z_.]+)*))|"
r"(?:^\s*cdef +extern +from +['\"]([^'\"]+)['\"])|"
r"(?:^\s*include +['\"]([^'\"]+)['\"])", re.M)
+dependency_after_from_regex = re.compile(
+ r"(?:^\s+\(([0-9a-zA-Z_., ]*)\)[#\n])|"
+ r"(?:^\s+([0-9a-zA-Z_., ]*)[#\n])",
+ re.M)
def normalize_existing(base_path, rel_paths):
@@ -466,11 +476,8 @@
# Actual parsing is way too slow, so we use regular expressions.
# The only catch is that we must strip comments and string
# literals ahead of time.
- fh = Utils.open_source_file(source_filename, error_handling='ignore')
- try:
+ with Utils.open_source_file(source_filename, error_handling='ignore') as fh:
source = fh.read()
- finally:
- fh.close()
distutils_info = DistutilsInfo(source)
source, literals = strip_string_literals(source)
source = source.replace('\\\n', ' ').replace('\t', ' ')
@@ -483,6 +490,13 @@
cimport_from, cimport_list, extern, include = m.groups()
if cimport_from:
cimports.append(cimport_from)
+ m_after_from = dependency_after_from_regex.search(source, pos=m.end())
+ if m_after_from:
+ multiline, one_line = m_after_from.groups()
+ subimports = multiline or one_line
+ cimports.extend("{0}.{1}".format(cimport_from, s.strip())
+ for s in subimports.split(','))
+
elif cimport_list:
cimports.extend(x.strip() for x in cimport_list.split(","))
elif extern:
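Editor's note: the new ``dependency_after_from_regex`` (defined in an earlier hunk) picks up module names on the indented continuation lines of a parenthesised ``from package cimport (...)`` statement. A quick standalone check with an illustrative source string:

```python
import re

# Copied from the hunk adding dependency_after_from_regex above.
dependency_after_from_regex = re.compile(
    r"(?:^\s+\(([0-9a-zA-Z_., ]*)\)[#\n])|"
    r"(?:^\s+([0-9a-zA-Z_., ]*)[#\n])",
    re.M)

source = "from pkg cimport (\n    mod_a, mod_b\n)\n"
# Search past the opening parenthesis, as the dependency parser does
# with pos=m.end() after matching the "from ... cimport" line.
m = dependency_after_from_regex.search(source, pos=source.index('(') + 1)
subimports = (m.group(1) or m.group(2)).split(',')
print(["pkg.%s" % s.strip() for s in subimports])  # ['pkg.mod_a', 'pkg.mod_b']
```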
@@ -579,14 +593,14 @@
pxd_list = [filename[:-4] + '.pxd']
else:
pxd_list = []
+ # cimports() generates all possible "package.module" combinations
+ # for imports of the form "from package cimport module".
for module in self.cimports(filename):
if module[:7] == 'cython.' or module == 'cython':
continue
pxd_file = self.find_pxd(module, filename)
if pxd_file is not None:
pxd_list.append(pxd_file)
- elif not self.quiet:
- print("%s: cannot find cimported module '%s'" % (filename, module))
return tuple(pxd_list)
@cached_method
@@ -609,15 +623,32 @@
def newest_dependency(self, filename):
return max([self.extract_timestamp(f) for f in self.all_dependencies(filename)])
- def transitive_fingerprint(self, filename, extra=None):
+ def transitive_fingerprint(self, filename, module, compilation_options):
+ r"""
+ Return a fingerprint of a cython file that is about to be cythonized.
+
+ Fingerprints are looked up in future compilations. If the fingerprint
+ is found, the cythonization can be skipped. The fingerprint must
+ incorporate everything that has an influence on the generated code.
+ """
try:
m = hashlib.md5(__version__.encode('UTF-8'))
m.update(file_hash(filename).encode('UTF-8'))
for x in sorted(self.all_dependencies(filename)):
if os.path.splitext(x)[1] not in ('.c', '.cpp', '.h'):
m.update(file_hash(x).encode('UTF-8'))
- if extra is not None:
- m.update(str(extra).encode('UTF-8'))
+ # Include the module attributes that change the compilation result
+ # in the fingerprint. We do not iterate over module.__dict__ and
+ # include almost everything here as users might extend Extension
+ # with arbitrary (random) attributes that would lead to cache
+ # misses.
+ m.update(str((
+ module.language,
+ getattr(module, 'py_limited_api', False),
+ getattr(module, 'np_pythran', False)
+ )).encode('UTF-8'))
+
+ m.update(compilation_options.get_fingerprint().encode('UTF-8'))
return m.hexdigest()
except IOError:
return None
@@ -712,7 +743,8 @@
def create_extension_list(patterns, exclude=None, ctx=None, aliases=None, quiet=False, language=None,
exclude_failures=False):
if language is not None:
- print('Please put "# distutils: language=%s" in your .pyx or .pxd file(s)' % language)
+ print('Warning: passing language={0!r} to cythonize() is deprecated. '
+ 'Instead, put "# distutils: language={0}" in your .pyx or .pxd file(s)'.format(language))
if exclude is None:
exclude = []
if patterns is None:
@@ -755,11 +787,11 @@
cython_sources = [s for s in pattern.sources
if os.path.splitext(s)[1] in ('.py', '.pyx')]
if cython_sources:
- filepattern = cython_sources[0]
- if len(cython_sources) > 1:
- print("Warning: Multiple cython sources found for extension '%s': %s\n"
- "See http://cython.readthedocs.io/en/latest/src/userguide/sharing_declarations.html "
- "for sharing declarations among Cython files." % (pattern.name, cython_sources))
+ filepattern = cython_sources[0]
+ if len(cython_sources) > 1:
+ print("Warning: Multiple cython sources found for extension '%s': %s\n"
+ "See http://cython.readthedocs.io/en/latest/src/userguide/sharing_declarations.html "
+ "for sharing declarations among Cython files." % (pattern.name, cython_sources))
else:
# ignore non-cython modules
module_list.append(pattern)
@@ -778,16 +810,15 @@
for file in nonempty(sorted(extended_iglob(filepattern)), "'%s' doesn't match any files" % filepattern):
if os.path.abspath(file) in to_exclude:
continue
- pkg = deps.package(file)
module_name = deps.fully_qualified_name(file)
if '*' in name:
if module_name in explicit_modules:
continue
- elif name != module_name:
- print("Warning: Extension name '%s' does not match fully qualified name '%s' of '%s'" % (
- name, module_name, file))
+ elif name:
module_name = name
+ Utils.raise_error_if_module_name_forbidden(module_name)
+
if module_name not in seen:
try:
kwds = deps.distutils_info(file, aliases, base).values
@@ -818,26 +849,9 @@
# Create the new extension
m, metadata = create_extension(template, kwds)
- if np_pythran:
- if not PythranAvailable:
- raise RuntimeError("You first need to install Pythran to use the np_pythran directive.")
- pythran_ext = pythran.config.make_extension()
- m.include_dirs.extend(pythran_ext['include_dirs'])
- m.extra_compile_args.extend(pythran_ext['extra_compile_args'])
- m.extra_link_args.extend(pythran_ext['extra_link_args'])
- m.define_macros.extend(pythran_ext['define_macros'])
- m.undef_macros.extend(pythran_ext['undef_macros'])
- m.library_dirs.extend(pythran_ext['library_dirs'])
- m.libraries.extend(pythran_ext['libraries'])
- # These options are not compatible with the way normal Cython extensions work
- try:
- m.extra_compile_args.remove("-fwhole-program")
- except ValueError: pass
- try:
- m.extra_compile_args.remove("-fvisibility=hidden")
- except ValueError: pass
- m.language = 'c++'
- m.np_pythran = np_pythran
+ m.np_pythran = np_pythran or getattr(m, 'np_pythran', False)
+ if m.np_pythran:
+ update_pythran_extension(m)
module_list.append(m)
# Store metadata (this will be written as JSON in the
@@ -845,8 +859,13 @@
module_metadata[module_name] = metadata
if file not in m.sources:
- # Old setuptools unconditionally replaces .pyx with .c
- m.sources.remove(file.rsplit('.')[0] + '.c')
+ # Old setuptools unconditionally replaces .pyx with .c/.cpp
+ target_file = os.path.splitext(file)[0] + ('.cpp' if m.language == 'c++' else '.c')
+ try:
+ m.sources.remove(target_file)
+ except ValueError:
+ # never seen this in the wild, but probably better to warn about this unexpected case
+ print("Warning: Cython source file not found in sources list, adding %s" % file)
m.sources.insert(0, file)
seen.add(name)
return module_list, module_metadata
@@ -859,41 +878,80 @@
Compile a set of source modules into C/C++ files and return a list of distutils
Extension objects for them.
- As module list, pass either a glob pattern, a list of glob patterns or a list of
- Extension objects. The latter allows you to configure the extensions separately
- through the normal distutils options.
-
- When using glob patterns, you can exclude certain module names explicitly
- by passing them into the 'exclude' option.
-
- To globally enable C++ mode, you can pass language='c++'. Otherwise, this
- will be determined at a per-file level based on compiler directives. This
- affects only modules found based on file names. Extension instances passed
- into cythonize() will not be changed.
-
- For parallel compilation, set the 'nthreads' option to the number of
- concurrent builds.
-
- For a broad 'try to compile' mode that ignores compilation failures and
- simply excludes the failed extensions, pass 'exclude_failures=True'. Note
- that this only really makes sense for compiling .py files which can also
- be used without compilation.
-
- Additional compilation options can be passed as keyword arguments.
+ :param module_list: As module list, pass either a glob pattern, a list of glob
+ patterns or a list of Extension objects. The latter
+ allows you to configure the extensions separately
+ through the normal distutils options.
+ You can also pass Extension objects that have
+ glob patterns as their sources. Then, cythonize
+ will resolve the pattern and create a
+ copy of the Extension for every matching file.
+
+ :param exclude: When passing glob patterns as ``module_list``, you can exclude certain
+ module names explicitly by passing them into the ``exclude`` option.
+
+ :param nthreads: The number of concurrent builds for parallel compilation
+ (requires the ``multiprocessing`` module).
+
+ :param aliases: If you want to use compiler directives like ``# distutils: ...`` but
+ can only know at compile time (when running the ``setup.py``) which values
+ to use, you can use aliases and pass a dictionary mapping those aliases
+ to Python strings when calling :func:`cythonize`. As an example, say you
+ want to use the compiler
+ directive ``# distutils: include_dirs = ../static_libs/include/``
+ but this path isn't always fixed and you want to find it when running
+ the ``setup.py``. You can then do ``# distutils: include_dirs = MY_HEADERS``,
+ find the value of ``MY_HEADERS`` in the ``setup.py``, put it in a python
+ variable called ``foo`` as a string, and then call
+ ``cythonize(..., aliases={'MY_HEADERS': foo})``.
+
+ :param quiet: If True, Cython won't print error and warning messages during the compilation.
+
+ :param force: Forces the recompilation of the Cython modules, even if the timestamps
+ don't indicate that a recompilation is necessary.
+
+ :param language: To globally enable C++ mode, you can pass ``language='c++'``. Otherwise, this
+ will be determined at a per-file level based on compiler directives. This
+ affects only modules found based on file names. Extension instances passed
+ into :func:`cythonize` will not be changed. It is recommended to rather
+ use the compiler directive ``# distutils: language = c++`` than this option.
+
+ :param exclude_failures: For a broad 'try to compile' mode that ignores compilation
+ failures and simply excludes the failed extensions,
+ pass ``exclude_failures=True``. Note that this only
+ really makes sense for compiling ``.py`` files which can also
+ be used without compilation.
+
+ :param annotate: If ``True``, will produce a HTML file for each of the ``.pyx`` or ``.py``
+ files compiled. The HTML file gives an indication
+ of how much Python interaction there is in
+ each of the source code lines, compared to plain C code.
+ It also allows you to see the C/C++ code
+ generated for each line of Cython code. This report is invaluable when
+ optimizing a function for speed,
+ and for determining when to :ref:`release the GIL <nogil>`:
+ in general, a ``nogil`` block may contain only "white" code.
+ See examples in :ref:`determining_where_to_add_types` or
+ :ref:`primes`.
+
+ :param compiler_directives: Allow to set compiler directives in the ``setup.py`` like this:
+ ``compiler_directives={'embedsignature': True}``.
+ See :ref:`compiler-directives`.
"""
if exclude is None:
exclude = []
if 'include_path' not in options:
options['include_path'] = ['.']
if 'common_utility_include_dir' in options:
- if options.get('cache'):
- raise NotImplementedError("common_utility_include_dir does not yet work with caching")
safe_makedirs(options['common_utility_include_dir'])
- if PythranAvailable:
- pythran_options = CompilationOptions(**options);
+
+ if pythran is None:
+ pythran_options = None
+ else:
+ pythran_options = CompilationOptions(**options)
pythran_options.cplus = True
pythran_options.np_pythran = True
- pythran_include_dir = os.path.dirname(pythran.__file__)
+
c_options = CompilationOptions(**options)
cpp_options = CompilationOptions(**options); cpp_options.cplus = True
ctx = c_options.create_context()
@@ -909,22 +967,33 @@
deps = create_dependency_tree(ctx, quiet=quiet)
build_dir = getattr(options, 'build_dir', None)
- modules_by_cfile = {}
+ def copy_to_build_dir(filepath, root=os.getcwd()):
+ filepath_abs = os.path.abspath(filepath)
+ if os.path.isabs(filepath):
+ filepath = filepath_abs
+ if filepath_abs.startswith(root):
+ # distutils extension depends are relative to cwd
+ mod_dir = join_path(build_dir,
+ os.path.dirname(_relpath(filepath, root)))
+ copy_once_if_newer(filepath_abs, mod_dir)
+
+ modules_by_cfile = collections.defaultdict(list)
to_compile = []
for m in module_list:
if build_dir:
- root = os.getcwd() # distutil extension depends are relative to cwd
- def copy_to_build_dir(filepath, root=root):
- filepath_abs = os.path.abspath(filepath)
- if os.path.isabs(filepath):
- filepath = filepath_abs
- if filepath_abs.startswith(root):
- mod_dir = join_path(build_dir,
- os.path.dirname(_relpath(filepath, root)))
- copy_once_if_newer(filepath_abs, mod_dir)
for dep in m.depends:
copy_to_build_dir(dep)
+ cy_sources = [
+ source for source in m.sources
+ if os.path.splitext(source)[1] in ('.pyx', '.py')]
+ if len(cy_sources) == 1:
+ # normal "special" case: believe the Extension module name to allow user overrides
+ full_module_name = m.name
+ else:
+ # infer FQMN from source files
+ full_module_name = None
+
new_sources = []
for source in m.sources:
base, ext = os.path.splitext(source)
@@ -941,6 +1010,8 @@
# setup for out of place build directory if enabled
if build_dir:
+ if os.path.isabs(c_file):
+ warnings.warn("build_dir has no effect for absolute source paths")
c_file = os.path.join(build_dir, c_file)
dir = os.path.dirname(c_file)
safe_makedirs_once(dir)
@@ -965,17 +1036,15 @@
else:
print("Compiling %s because it depends on %s." % (source, dep))
if not force and options.cache:
- extra = m.language
- fingerprint = deps.transitive_fingerprint(source, extra)
+ fingerprint = deps.transitive_fingerprint(source, m, options)
else:
fingerprint = None
- to_compile.append((priority, source, c_file, fingerprint, quiet,
- options, not exclude_failures, module_metadata.get(m.name)))
+ to_compile.append((
+ priority, source, c_file, fingerprint, quiet,
+ options, not exclude_failures, module_metadata.get(m.name),
+ full_module_name))
new_sources.append(c_file)
- if c_file not in modules_by_cfile:
- modules_by_cfile[c_file] = [m]
- else:
- modules_by_cfile[c_file].append(m)
+ modules_by_cfile[c_file].append(m)
else:
new_sources.append(source)
if build_dir:
@@ -1091,34 +1160,35 @@
# TODO: Share context? Issue: pyx processing leaks into pxd module
@record_results
-def cythonize_one(pyx_file, c_file, fingerprint, quiet, options=None, raise_on_failure=True, embedded_metadata=None, progress=""):
- from ..Compiler.Main import compile, default_options
+def cythonize_one(pyx_file, c_file, fingerprint, quiet, options=None,
+ raise_on_failure=True, embedded_metadata=None, full_module_name=None,
+ progress=""):
+ from ..Compiler.Main import compile_single, default_options
from ..Compiler.Errors import CompileError, PyrexError
if fingerprint:
if not os.path.exists(options.cache):
- try:
- os.mkdir(options.cache)
- except:
- if not os.path.exists(options.cache):
- raise
+ safe_makedirs(options.cache)
# Cython-generated c files are highly compressible.
# (E.g. a compression ratio of about 10 for Sage).
- fingerprint_file = join_path(
- options.cache, "%s-%s%s" % (os.path.basename(c_file), fingerprint, gzip_ext))
- if os.path.exists(fingerprint_file):
+ fingerprint_file_base = join_path(
+ options.cache, "%s-%s" % (os.path.basename(c_file), fingerprint))
+ gz_fingerprint_file = fingerprint_file_base + gzip_ext
+ zip_fingerprint_file = fingerprint_file_base + '.zip'
+ if os.path.exists(gz_fingerprint_file) or os.path.exists(zip_fingerprint_file):
if not quiet:
print("%sFound compiled %s in cache" % (progress, pyx_file))
- os.utime(fingerprint_file, None)
- g = gzip_open(fingerprint_file, 'rb')
- try:
- f = open(c_file, 'wb')
- try:
- shutil.copyfileobj(g, f)
- finally:
- f.close()
- finally:
- g.close()
+ if os.path.exists(gz_fingerprint_file):
+ os.utime(gz_fingerprint_file, None)
+ with contextlib.closing(gzip_open(gz_fingerprint_file, 'rb')) as g:
+ with contextlib.closing(open(c_file, 'wb')) as f:
+ shutil.copyfileobj(g, f)
+ else:
+ os.utime(zip_fingerprint_file, None)
+ dirname = os.path.dirname(c_file)
+ with contextlib.closing(zipfile.ZipFile(zip_fingerprint_file)) as z:
+ for artifact in z.namelist():
+ z.extract(artifact, os.path.join(dirname, artifact))
return
if not quiet:
print("%sCythonizing %s" % (progress, pyx_file))
@@ -1129,7 +1199,7 @@
any_failures = 0
try:
- result = compile([pyx_file], options)
+ result = compile_single(pyx_file, options, full_module_name=full_module_name)
if result.num_errors > 0:
any_failures = 1
except (EnvironmentError, PyrexError) as e:
@@ -1150,15 +1220,21 @@
elif os.path.exists(c_file):
os.remove(c_file)
elif fingerprint:
- f = open(c_file, 'rb')
- try:
- g = gzip_open(fingerprint_file, 'wb')
- try:
- shutil.copyfileobj(f, g)
- finally:
- g.close()
- finally:
- f.close()
+ artifacts = list(filter(None, [
+ getattr(result, attr, None)
+ for attr in ('c_file', 'h_file', 'api_file', 'i_file')]))
+ if len(artifacts) == 1:
+ fingerprint_file = gz_fingerprint_file
+ with contextlib.closing(open(c_file, 'rb')) as f:
+ with contextlib.closing(gzip_open(fingerprint_file + '.tmp', 'wb')) as g:
+ shutil.copyfileobj(f, g)
+ else:
+ fingerprint_file = zip_fingerprint_file
+ with contextlib.closing(zipfile.ZipFile(
+ fingerprint_file + '.tmp', 'w', zipfile_compression_mode)) as zip:
+ for artifact in artifacts:
+ zip.write(artifact, os.path.basename(artifact))
+ os.rename(fingerprint_file + '.tmp', fingerprint_file)
def cythonize_one_helper(m):
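The rewritten cache-store logic above gzips a single output file but bundles multiple artifacts (`.c`/`.h`/`_api.h`) into one zip, writing to a `.tmp` file first and renaming it into place. A condensed, standalone sketch of that scheme (the helper name `store_in_cache` is illustrative, not part of Cython's API, and `ZIP_DEFLATED` stands in for the diff's `zipfile_compression_mode`):

```python
import contextlib
import gzip
import os
import shutil
import tempfile
import zipfile


def store_in_cache(artifacts, fingerprint_base):
    # One artifact: gzip it, as the .c output compresses very well.
    if len(artifacts) == 1:
        fingerprint_file = fingerprint_base + '.gz'
        with open(artifacts[0], 'rb') as f:
            with contextlib.closing(gzip.open(fingerprint_file + '.tmp', 'wb')) as g:
                shutil.copyfileobj(f, g)
    # Several artifacts: pack them all into a single zip entry.
    else:
        fingerprint_file = fingerprint_base + '.zip'
        with contextlib.closing(zipfile.ZipFile(
                fingerprint_file + '.tmp', 'w', zipfile.ZIP_DEFLATED)) as z:
            for artifact in artifacts:
                z.write(artifact, os.path.basename(artifact))
    # Write-then-rename keeps the cache entry atomic on POSIX.
    os.rename(fingerprint_file + '.tmp', fingerprint_file)
    return fingerprint_file


tmp = tempfile.mkdtemp()
c = os.path.join(tmp, 'mod.c')
h = os.path.join(tmp, 'mod.h')
for p in (c, h):
    with open(p, 'w') as f:
        f.write('/* out */')
single = store_in_cache([c], os.path.join(tmp, 'mod.c-abc'))
multi = store_in_cache([c, h], os.path.join(tmp, 'mod.c-def'))
assert single.endswith('.gz') and multi.endswith('.zip')
shutil.rmtree(tmp)
```

The final rename mirrors the diff's `os.rename(fingerprint_file + '.tmp', fingerprint_file)`, so a concurrent build never observes a half-written cache entry.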
diff -Nru cython-0.26.1/Cython/Build/Inline.py cython-0.29.14/Cython/Build/Inline.py
--- cython-0.26.1/Cython/Build/Inline.py 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Build/Inline.py 2019-05-27 19:37:21.000000000 +0000
@@ -90,7 +90,7 @@
elif 'numpy' in sys.modules and isinstance(arg, sys.modules['numpy'].ndarray):
return 'numpy.ndarray[numpy.%s_t, ndim=%s]' % (arg.dtype.name, arg.ndim)
else:
- for base_type in py_type.mro():
+ for base_type in py_type.__mro__:
if base_type.__module__ in ('__builtin__', 'builtins'):
return 'object'
module = context.find_module(base_type.__module__, need_pxd=False)
@@ -136,8 +136,10 @@
else:
print("Couldn't find %r" % symbol)
-def cython_inline(code, get_type=unsafe_type, lib_dir=os.path.join(get_cython_cache_dir(), 'inline'),
- cython_include_dirs=None, force=False, quiet=False, locals=None, globals=None, **kwds):
+def cython_inline(code, get_type=unsafe_type,
+ lib_dir=os.path.join(get_cython_cache_dir(), 'inline'),
+ cython_include_dirs=None, cython_compiler_directives=None,
+ force=False, quiet=False, locals=None, globals=None, language_level=None, **kwds):
if get_type is None:
get_type = lambda x: 'object'
@@ -169,6 +171,11 @@
if not quiet:
# Parsing from strings not fully supported (e.g. cimports).
print("Could not parse code as a string (to extract unbound symbols).")
+
+ cython_compiler_directives = dict(cython_compiler_directives or {})
+ if language_level is not None:
+ cython_compiler_directives['language_level'] = language_level
+
cimports = []
for name, arg in list(kwds.items()):
if arg is cython_module:
@@ -176,7 +183,7 @@
del kwds[name]
arg_names = sorted(kwds)
arg_sigs = tuple([(get_type(kwds[arg], ctx), arg) for arg in arg_names])
- key = orig_code, arg_sigs, sys.version_info, sys.executable, Cython.__version__
+ key = orig_code, arg_sigs, sys.version_info, sys.executable, language_level, Cython.__version__
module_name = "_cython_inline_" + hashlib.md5(_unicode(key).encode('utf-8')).hexdigest()
if module_name in sys.modules:
@@ -233,7 +240,11 @@
extra_compile_args = cflags)
if build_extension is None:
build_extension = _get_build_extension()
- build_extension.extensions = cythonize([extension], include_path=cython_include_dirs or ['.'], quiet=quiet)
+ build_extension.extensions = cythonize(
+ [extension],
+ include_path=cython_include_dirs or ['.'],
+ compiler_directives=cython_compiler_directives,
+ quiet=quiet)
build_extension.build_temp = os.path.dirname(pyx_file)
build_extension.build_lib = lib_dir
build_extension.run()
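The `cython_inline` hunks above add `language_level` to the cache key, so switching language levels produces a differently named cached module. A minimal sketch of that key scheme (the function name `inline_module_name` is illustrative; the real code hashes `_unicode(key)`):

```python
import hashlib
import sys


def inline_module_name(code, arg_sigs, language_level=None,
                       cython_version="0.29.14"):
    # Mirrors the cache key built in cython_inline: the language level now
    # participates, so Py2 and Py3 semantics never share a cached module.
    key = (code, arg_sigs, sys.version_info, sys.executable,
           language_level, cython_version)
    return "_cython_inline_" + hashlib.md5(str(key).encode('utf-8')).hexdigest()


name2 = inline_module_name("return a+b", (('int', 'a'),), language_level=2)
name3 = inline_module_name("return a+b", (('int', 'a'),), language_level=3)
assert name2 != name3  # distinct cached modules per language level
```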
diff -Nru cython-0.26.1/Cython/Build/IpythonMagic.py cython-0.29.14/Cython/Build/IpythonMagic.py
--- cython-0.26.1/Cython/Build/IpythonMagic.py 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Build/IpythonMagic.py 2019-11-01 14:13:39.000000000 +0000
@@ -14,7 +14,7 @@
Usage
=====
-To enable the magics below, execute ``%load_ext cythonmagic``.
+To enable the magics below, execute ``%load_ext cython``.
``%%cython``
@@ -52,6 +52,10 @@
import re
import sys
import time
+import copy
+import distutils.log
+import textwrap
+
try:
reload
@@ -83,6 +87,20 @@
from .Dependencies import cythonize
+PGO_CONFIG = {
+ 'gcc': {
+ 'gen': ['-fprofile-generate', '-fprofile-dir={TEMPDIR}'],
+ 'use': ['-fprofile-use', '-fprofile-correction', '-fprofile-dir={TEMPDIR}'],
+ },
+ # blind copy from 'configure' script in CPython 3.7
+ 'icc': {
+ 'gen': ['-prof-gen'],
+ 'use': ['-prof-use'],
+ }
+}
+PGO_CONFIG['mingw32'] = PGO_CONFIG['gcc']
+
+
@magics_class
class CythonMagics(Magics):
@@ -148,11 +166,15 @@
f.write(cell)
if 'pyximport' not in sys.modules or not self._pyximport_installed:
import pyximport
- pyximport.install(reload_support=True)
+ pyximport.install()
self._pyximport_installed = True
if module_name in self._reloads:
module = self._reloads[module_name]
- reload(module)
+ # Note: reloading extension modules is not actually supported
+ # (requires PEP-489 reinitialisation support).
+ # Don't know why this should ever have worked as it reads here.
+ # All we really need to do is to update the globals below.
+ #reload(module)
else:
__import__(module_name)
module = sys.modules[module_name]
@@ -161,6 +183,14 @@
@magic_arguments.magic_arguments()
@magic_arguments.argument(
+ '-a', '--annotate', action='store_true', default=False,
+ help="Produce a colorized HTML version of the source."
+ )
+ @magic_arguments.argument(
+ '-+', '--cplus', action='store_true', default=False,
+ help="Output a C++ rather than C file."
+ )
+ @magic_arguments.argument(
'-3', dest='language_level', action='store_const', const=3, default=None,
help="Select Python 3 syntax."
)
@@ -169,6 +199,11 @@
help="Select Python 2 syntax."
)
@magic_arguments.argument(
+ '-f', '--force', action='store_true', default=False,
+ help="Force the compilation of a new module, even if the source has been "
+ "previously compiled."
+ )
+ @magic_arguments.argument(
'-c', '--compile-args', action='append', default=[],
help="Extra flags to pass to compiler via the `extra_compile_args` "
"Extension flag (can be specified multiple times)."
@@ -203,17 +238,14 @@
"multiple times)."
)
@magic_arguments.argument(
- '-+', '--cplus', action='store_true', default=False,
- help="Output a C++ rather than C file."
+ '--pgo', dest='pgo', action='store_true', default=False,
+ help=("Enable profile guided optimisation in the C compiler. "
+ "Compiles the cell twice and executes it in between to generate a runtime profile.")
)
@magic_arguments.argument(
- '-f', '--force', action='store_true', default=False,
- help="Force the compilation of a new module, even if the source has been "
- "previously compiled."
- )
- @magic_arguments.argument(
- '-a', '--annotate', action='store_true', default=False,
- help="Produce a colorized HTML version of the source."
+ '--verbose', dest='quiet', action='store_false', default=True,
+ help=("Print debug information like generated .c/.cpp file location "
+ "and exact gcc/g++ command invoked.")
)
@cell_magic
def cython(self, line, cell):
@@ -235,77 +267,78 @@
%%cython --compile-args=-fopenmp --link-args=-fopenmp
...
+
+ To enable profile guided optimisation, pass the ``--pgo`` option.
+ Note that the cell itself needs to take care of establishing a suitable
+ profile when executed. This can be done by implementing the functions to
+ optimise, and then calling them directly in the same cell on some realistic
+ training data like this::
+
+ %%cython --pgo
+ def critical_function(data):
+ for item in data:
+ ...
+
+ # execute function several times to build profile
+ from somewhere import some_typical_data
+ for _ in range(100):
+ critical_function(some_typical_data)
+
+ In Python 3.5 and later, you can distinguish between the profile and
+ non-profile runs as follows::
+
+ if "_pgo_" in __name__:
+ ... # execute critical code here
"""
args = magic_arguments.parse_argstring(self.cython, line)
code = cell if cell.endswith('\n') else cell + '\n'
lib_dir = os.path.join(get_ipython_cache_dir(), 'cython')
- quiet = True
- key = code, line, sys.version_info, sys.executable, cython_version
+ key = (code, line, sys.version_info, sys.executable, cython_version)
if not os.path.exists(lib_dir):
os.makedirs(lib_dir)
+ if args.pgo:
+ key += ('pgo',)
if args.force:
# Force a new module name by adding the current time to the
# key which is hashed to determine the module name.
- key += time.time(),
+ key += (time.time(),)
if args.name:
module_name = py3compat.unicode_to_str(args.name)
else:
module_name = "_cython_magic_" + hashlib.md5(str(key).encode('utf-8')).hexdigest()
+ html_file = os.path.join(lib_dir, module_name + '.html')
module_path = os.path.join(lib_dir, module_name + self.so_ext)
have_module = os.path.isfile(module_path)
- need_cythonize = not have_module
+ need_cythonize = args.pgo or not have_module
if args.annotate:
- html_file = os.path.join(lib_dir, module_name + '.html')
if not os.path.isfile(html_file):
need_cythonize = True
+ extension = None
if need_cythonize:
- c_include_dirs = args.include
- c_src_files = list(map(str, args.src))
- if 'numpy' in code:
- import numpy
- c_include_dirs.append(numpy.get_include())
- pyx_file = os.path.join(lib_dir, module_name + '.pyx')
- pyx_file = py3compat.cast_bytes_py2(pyx_file, encoding=sys.getfilesystemencoding())
- with io.open(pyx_file, 'w', encoding='utf-8') as f:
- f.write(code)
- extension = Extension(
- name=module_name,
- sources=[pyx_file] + c_src_files,
- include_dirs=c_include_dirs,
- library_dirs=args.library_dirs,
- extra_compile_args=args.compile_args,
- extra_link_args=args.link_args,
- libraries=args.lib,
- language='c++' if args.cplus else 'c',
- )
- build_extension = self._get_build_extension()
- try:
- opts = dict(
- quiet=quiet,
- annotate=args.annotate,
- force=True,
- )
- if args.language_level is not None:
- assert args.language_level in (2, 3)
- opts['language_level'] = args.language_level
- elif sys.version_info[0] > 2:
- opts['language_level'] = 3
- build_extension.extensions = cythonize([extension], **opts)
- except CompileError:
- return
-
- if not have_module:
- build_extension.build_temp = os.path.dirname(pyx_file)
- build_extension.build_lib = lib_dir
- build_extension.run()
+ extensions = self._cythonize(module_name, code, lib_dir, args, quiet=args.quiet)
+ if extensions is None:
+ # Compilation failed and printed error message
+ return None
+ assert len(extensions) == 1
+ extension = extensions[0]
self._code_cache[key] = module_name
+ if args.pgo:
+ self._profile_pgo_wrapper(extension, lib_dir)
+
+ try:
+ self._build_extension(extension, lib_dir, pgo_step_name='use' if args.pgo else None,
+ quiet=args.quiet)
+ except distutils.errors.CompileError:
+ # Build failed and printed error message
+ return None
+
module = imp.load_dynamic(module_name, module_path)
self._import_all(module)
@@ -324,6 +357,129 @@
else:
return display.HTML(self.clean_annotated_html(annotated_html))
+ def _profile_pgo_wrapper(self, extension, lib_dir):
+ """
+ Generate a .c file for a separate extension module that calls the
+ module init function of the original module. This makes sure that the
+ PGO profiler sees the correct .o file of the final module, but it still
+ allows us to import the module under a different name for profiling,
+ before recompiling it into the PGO optimised module. Overwriting and
+ reimporting the same shared library is not portable.
+ """
+ extension = copy.copy(extension) # shallow copy, do not modify sources in place!
+ module_name = extension.name
+ pgo_module_name = '_pgo_' + module_name
+ pgo_wrapper_c_file = os.path.join(lib_dir, pgo_module_name + '.c')
+ with io.open(pgo_wrapper_c_file, 'w', encoding='utf-8') as f:
+ f.write(textwrap.dedent(u"""
+ #include "Python.h"
+ #if PY_MAJOR_VERSION < 3
+ extern PyMODINIT_FUNC init%(module_name)s(void);
+ PyMODINIT_FUNC init%(pgo_module_name)s(void); /*proto*/
+ PyMODINIT_FUNC init%(pgo_module_name)s(void) {
+ PyObject *sys_modules;
+ init%(module_name)s(); if (PyErr_Occurred()) return;
+ sys_modules = PyImport_GetModuleDict(); /* borrowed, no exception, "never" fails */
+ if (sys_modules) {
+ PyObject *module = PyDict_GetItemString(sys_modules, "%(module_name)s"); if (!module) return;
+ PyDict_SetItemString(sys_modules, "%(pgo_module_name)s", module);
+ Py_DECREF(module);
+ }
+ }
+ #else
+ extern PyMODINIT_FUNC PyInit_%(module_name)s(void);
+ PyMODINIT_FUNC PyInit_%(pgo_module_name)s(void); /*proto*/
+ PyMODINIT_FUNC PyInit_%(pgo_module_name)s(void) {
+ return PyInit_%(module_name)s();
+ }
+ #endif
+ """ % {'module_name': module_name, 'pgo_module_name': pgo_module_name}))
+
+ extension.sources = extension.sources + [pgo_wrapper_c_file] # do not modify in place!
+ extension.name = pgo_module_name
+
+ self._build_extension(extension, lib_dir, pgo_step_name='gen')
+
+ # import and execute module code to generate profile
+ so_module_path = os.path.join(lib_dir, pgo_module_name + self.so_ext)
+ imp.load_dynamic(pgo_module_name, so_module_path)
+
+ def _cythonize(self, module_name, code, lib_dir, args, quiet=True):
+ pyx_file = os.path.join(lib_dir, module_name + '.pyx')
+ pyx_file = py3compat.cast_bytes_py2(pyx_file, encoding=sys.getfilesystemencoding())
+
+ c_include_dirs = args.include
+ c_src_files = list(map(str, args.src))
+ if 'numpy' in code:
+ import numpy
+ c_include_dirs.append(numpy.get_include())
+ with io.open(pyx_file, 'w', encoding='utf-8') as f:
+ f.write(code)
+ extension = Extension(
+ name=module_name,
+ sources=[pyx_file] + c_src_files,
+ include_dirs=c_include_dirs,
+ library_dirs=args.library_dirs,
+ extra_compile_args=args.compile_args,
+ extra_link_args=args.link_args,
+ libraries=args.lib,
+ language='c++' if args.cplus else 'c',
+ )
+ try:
+ opts = dict(
+ quiet=quiet,
+ annotate=args.annotate,
+ force=True,
+ )
+ if args.language_level is not None:
+ assert args.language_level in (2, 3)
+ opts['language_level'] = args.language_level
+ elif sys.version_info[0] >= 3:
+ opts['language_level'] = 3
+ return cythonize([extension], **opts)
+ except CompileError:
+ return None
+
+ def _build_extension(self, extension, lib_dir, temp_dir=None, pgo_step_name=None, quiet=True):
+ build_extension = self._get_build_extension(
+ extension, lib_dir=lib_dir, temp_dir=temp_dir, pgo_step_name=pgo_step_name)
+ old_threshold = None
+ try:
+ if not quiet:
+ old_threshold = distutils.log.set_threshold(distutils.log.DEBUG)
+ build_extension.run()
+ finally:
+ if not quiet and old_threshold is not None:
+ distutils.log.set_threshold(old_threshold)
+
+ def _add_pgo_flags(self, build_extension, step_name, temp_dir):
+ compiler_type = build_extension.compiler.compiler_type
+ if compiler_type == 'unix':
+ compiler_cmd = build_extension.compiler.compiler_so
+ # TODO: we could try to call "[cmd] --version" for better insights
+ if not compiler_cmd:
+ pass
+ elif 'clang' in compiler_cmd or 'clang' in compiler_cmd[0]:
+ compiler_type = 'clang'
+ elif 'icc' in compiler_cmd or 'icc' in compiler_cmd[0]:
+ compiler_type = 'icc'
+ elif 'gcc' in compiler_cmd or 'gcc' in compiler_cmd[0]:
+ compiler_type = 'gcc'
+ elif 'g++' in compiler_cmd or 'g++' in compiler_cmd[0]:
+ compiler_type = 'gcc'
+ config = PGO_CONFIG.get(compiler_type)
+ orig_flags = []
+ if config and step_name in config:
+ flags = [f.format(TEMPDIR=temp_dir) for f in config[step_name]]
+ for extension in build_extension.extensions:
+ orig_flags.append((extension.extra_compile_args, extension.extra_link_args))
+ extension.extra_compile_args = extension.extra_compile_args + flags
+ extension.extra_link_args = extension.extra_link_args + flags
+ else:
+ print("No PGO %s configuration known for C compiler type '%s'" % (step_name, compiler_type),
+ file=sys.stderr)
+ return orig_flags
+
@property
def so_ext(self):
"""The extension suffix for compiled modules."""
@@ -345,7 +501,8 @@
else:
_path_created.clear()
- def _get_build_extension(self):
+ def _get_build_extension(self, extension=None, lib_dir=None, temp_dir=None,
+ pgo_step_name=None, _build_ext=build_ext):
self._clear_distutils_mkpath_cache()
dist = Distribution()
config_files = dist.find_config_files()
@@ -354,8 +511,28 @@
except ValueError:
pass
dist.parse_config_files(config_files)
- build_extension = build_ext(dist)
+
+ if not temp_dir:
+ temp_dir = lib_dir
+ add_pgo_flags = self._add_pgo_flags
+
+ if pgo_step_name:
+ base_build_ext = _build_ext
+ class _build_ext(_build_ext):
+ def build_extensions(self):
+ add_pgo_flags(self, pgo_step_name, temp_dir)
+ base_build_ext.build_extensions(self)
+
+ build_extension = _build_ext(dist)
build_extension.finalize_options()
+ if temp_dir:
+ temp_dir = py3compat.cast_bytes_py2(temp_dir, encoding=sys.getfilesystemencoding())
+ build_extension.build_temp = temp_dir
+ if lib_dir:
+ lib_dir = py3compat.cast_bytes_py2(lib_dir, encoding=sys.getfilesystemencoding())
+ build_extension.build_lib = lib_dir
+ if extension is not None:
+ build_extension.extensions = [extension]
return build_extension
@staticmethod
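The `_add_pgo_flags` logic above guesses the compiler family from the command line and looks the PGO step up in `PGO_CONFIG`. A distilled sketch of that mapping (`pgo_flags` and the simplified detection loop are illustrative; the `PGO_CONFIG` dict mirrors the one added in `IpythonMagic.py`):

```python
PGO_CONFIG = {
    'gcc': {
        'gen': ['-fprofile-generate', '-fprofile-dir={TEMPDIR}'],
        'use': ['-fprofile-use', '-fprofile-correction', '-fprofile-dir={TEMPDIR}'],
    },
    'icc': {'gen': ['-prof-gen'], 'use': ['-prof-use']},
}
PGO_CONFIG['mingw32'] = PGO_CONFIG['gcc']


def pgo_flags(compiler_cmd, step_name, temp_dir):
    # Guess the compiler family from the command, as _add_pgo_flags does.
    compiler_type = None
    for candidate in ('clang', 'icc', 'gcc', 'g++'):
        if any(candidate in part for part in compiler_cmd):
            compiler_type = 'gcc' if candidate == 'g++' else candidate
            break
    config = PGO_CONFIG.get(compiler_type)
    if not config or step_name not in config:
        return []  # no known configuration: the magic prints a warning instead
    return [f.format(TEMPDIR=temp_dir) for f in config[step_name]]


assert pgo_flags(['gcc', '-O2'], 'gen', '/tmp/pgo') == [
    '-fprofile-generate', '-fprofile-dir=/tmp/pgo']
```

Note that `clang` is detected as a compiler type but has no `PGO_CONFIG` entry, so in the real magic a clang build falls through to the warning branch.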
diff -Nru cython-0.26.1/Cython/Build/Tests/TestCyCache.py cython-0.29.14/Cython/Build/Tests/TestCyCache.py
--- cython-0.26.1/Cython/Build/Tests/TestCyCache.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/Cython/Build/Tests/TestCyCache.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,106 @@
+import difflib
+import glob
+import gzip
+import os
+import tempfile
+
+import Cython.Build.Dependencies
+import Cython.Utils
+from Cython.TestUtils import CythonTest
+
+
+class TestCyCache(CythonTest):
+
+ def setUp(self):
+ CythonTest.setUp(self)
+ self.temp_dir = tempfile.mkdtemp(
+ prefix='cycache-test',
+ dir='TEST_TMP' if os.path.isdir('TEST_TMP') else None)
+ self.src_dir = tempfile.mkdtemp(prefix='src', dir=self.temp_dir)
+ self.cache_dir = tempfile.mkdtemp(prefix='cache', dir=self.temp_dir)
+
+ def cache_files(self, file_glob):
+ return glob.glob(os.path.join(self.cache_dir, file_glob))
+
+ def fresh_cythonize(self, *args, **kwargs):
+ Cython.Utils.clear_function_caches()
+ Cython.Build.Dependencies._dep_tree = None # discard method caches
+ Cython.Build.Dependencies.cythonize(*args, **kwargs)
+
+ def test_cycache_switch(self):
+ content1 = 'value = 1\n'
+ content2 = 'value = 2\n'
+ a_pyx = os.path.join(self.src_dir, 'a.pyx')
+ a_c = a_pyx[:-4] + '.c'
+
+ open(a_pyx, 'w').write(content1)
+ self.fresh_cythonize(a_pyx, cache=self.cache_dir)
+ self.fresh_cythonize(a_pyx, cache=self.cache_dir)
+ self.assertEqual(1, len(self.cache_files('a.c*')))
+ a_contents1 = open(a_c).read()
+ os.unlink(a_c)
+
+ open(a_pyx, 'w').write(content2)
+ self.fresh_cythonize(a_pyx, cache=self.cache_dir)
+ a_contents2 = open(a_c).read()
+ os.unlink(a_c)
+
+ self.assertNotEqual(a_contents1, a_contents2, 'C file not changed!')
+ self.assertEqual(2, len(self.cache_files('a.c*')))
+
+ open(a_pyx, 'w').write(content1)
+ self.fresh_cythonize(a_pyx, cache=self.cache_dir)
+ self.assertEqual(2, len(self.cache_files('a.c*')))
+ a_contents = open(a_c).read()
+ self.assertEqual(
+ a_contents, a_contents1,
+ msg='\n'.join(list(difflib.unified_diff(
+ a_contents.split('\n'), a_contents1.split('\n')))[:10]))
+
+ def test_cycache_uses_cache(self):
+ a_pyx = os.path.join(self.src_dir, 'a.pyx')
+ a_c = a_pyx[:-4] + '.c'
+ open(a_pyx, 'w').write('pass')
+ self.fresh_cythonize(a_pyx, cache=self.cache_dir)
+ a_cache = os.path.join(self.cache_dir, os.listdir(self.cache_dir)[0])
+ gzip.GzipFile(a_cache, 'wb').write('fake stuff'.encode('ascii'))
+ os.unlink(a_c)
+ self.fresh_cythonize(a_pyx, cache=self.cache_dir)
+ a_contents = open(a_c).read()
+ self.assertEqual(a_contents, 'fake stuff',
+ 'Unexpected contents: %s...' % a_contents[:100])
+
+ def test_multi_file_output(self):
+ a_pyx = os.path.join(self.src_dir, 'a.pyx')
+ a_c = a_pyx[:-4] + '.c'
+ a_h = a_pyx[:-4] + '.h'
+ a_api_h = a_pyx[:-4] + '_api.h'
+ open(a_pyx, 'w').write('cdef public api int foo(int x): return x\n')
+ self.fresh_cythonize(a_pyx, cache=self.cache_dir)
+ expected = [a_c, a_h, a_api_h]
+ for output in expected:
+ self.assertTrue(os.path.exists(output), output)
+ os.unlink(output)
+ self.fresh_cythonize(a_pyx, cache=self.cache_dir)
+ for output in expected:
+ self.assertTrue(os.path.exists(output), output)
+
+ def test_options_invalidation(self):
+ hash_pyx = os.path.join(self.src_dir, 'options.pyx')
+ hash_c = hash_pyx[:-len('.pyx')] + '.c'
+
+ open(hash_pyx, 'w').write('pass')
+ self.fresh_cythonize(hash_pyx, cache=self.cache_dir, cplus=False)
+ self.assertEqual(1, len(self.cache_files('options.c*')))
+
+ os.unlink(hash_c)
+ self.fresh_cythonize(hash_pyx, cache=self.cache_dir, cplus=True)
+ self.assertEqual(2, len(self.cache_files('options.c*')))
+
+ os.unlink(hash_c)
+ self.fresh_cythonize(hash_pyx, cache=self.cache_dir, cplus=False, show_version=False)
+ self.assertEqual(2, len(self.cache_files('options.c*')))
+
+ os.unlink(hash_c)
+ self.fresh_cythonize(hash_pyx, cache=self.cache_dir, cplus=False, show_version=True)
+ self.assertEqual(2, len(self.cache_files('options.c*')))
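The cache layout these tests exercise names each entry `<basename>-<fingerprint><ext>` under the cache directory, gzip-compressed for single-file output. A toy round-trip of that layout (`cache_path` is an illustrative helper, not Cython's API):

```python
import gzip
import os
import shutil
import tempfile


def cache_path(cache_dir, c_file, fingerprint, ext='.gz'):
    # "<basename>-<fingerprint><ext>", matching the fingerprint file naming
    # used by cythonize_one.
    return os.path.join(
        cache_dir, "%s-%s%s" % (os.path.basename(c_file), fingerprint, ext))


cache_dir = tempfile.mkdtemp()
path = cache_path(cache_dir, '/src/a.c', 'deadbeef')
with gzip.open(path, 'wb') as g:
    g.write(b'/* generated C */')
with gzip.open(path, 'rb') as g:
    assert g.read() == b'/* generated C */'
shutil.rmtree(cache_dir)
```

This is also why `test_cycache_uses_cache` can overwrite the cache entry with fake gzipped content and see it reappear verbatim as the regenerated `.c` file.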
diff -Nru cython-0.26.1/Cython/Build/Tests/TestInline.py cython-0.29.14/Cython/Build/Tests/TestInline.py
--- cython-0.26.1/Cython/Build/Tests/TestInline.py 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Build/Tests/TestInline.py 2019-05-27 19:37:21.000000000 +0000
@@ -51,6 +51,12 @@
foo = inline("def foo(x): return x * x", **self.test_kwds)['foo']
self.assertEquals(foo(7), 49)
+ def test_class_ref(self):
+ class Type(object):
+ pass
+ tp = inline("Type")['Type']
+ self.assertEqual(tp, Type)
+
def test_pure(self):
import cython as cy
b = inline("""
@@ -60,6 +66,14 @@
""", a=3, **self.test_kwds)
self.assertEquals(type(b), float)
+ def test_compiler_directives(self):
+ self.assertEqual(
+ inline('return sum(x)',
+ x=[1, 2, 3],
+ cython_compiler_directives={'boundscheck': False}),
+ 6
+ )
+
if has_numpy:
def test_numpy(self):
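The type-detection change in `Inline.py` above switches from calling `py_type.mro()` to reading the `__mro__` attribute while scanning for the first builtin base. The same walk, as a standalone sketch (`first_builtin_base` is an illustrative name):

```python
def first_builtin_base(obj):
    # Walk the method resolution order and stop at the first class that
    # lives in the builtins module, as Inline.py does when choosing 'object'.
    for base_type in type(obj).__mro__:
        if base_type.__module__ in ('__builtin__', 'builtins'):
            return base_type
    return object


class MyList(list):
    pass


assert first_builtin_base(MyList()) is list
assert first_builtin_base(3) is int
```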
diff -Nru cython-0.26.1/Cython/Build/Tests/TestIpythonMagic.py cython-0.29.14/Cython/Build/Tests/TestIpythonMagic.py
--- cython-0.26.1/Cython/Build/Tests/TestIpythonMagic.py 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/Cython/Build/Tests/TestIpythonMagic.py 2018-09-22 14:18:56.000000000 +0000
@@ -3,25 +3,38 @@
"""Tests for the Cython magics extension."""
+from __future__ import absolute_import
+
import os
import sys
+from contextlib import contextmanager
+from Cython.Build import IpythonMagic
+from Cython.TestUtils import CythonTest
try:
- from IPython.testing.globalipapp import get_ipython
+ import IPython.testing.globalipapp
from IPython.utils import py3compat
-except:
- __test__ = False
+except ImportError:
+ # Disable tests and fake helpers for initialisation below.
+ class _py3compat(object):
+ def str_to_unicode(self, s):
+ return s
+
+ py3compat = _py3compat()
+
+ def skip_if_not_installed(_):
+ return None
+else:
+ def skip_if_not_installed(c):
+ return c
try:
- # disable IPython history thread to avoid having to clean it up
+ # disable IPython history thread before it gets started to avoid having to clean it up
from IPython.core.history import HistoryManager
HistoryManager.enabled = False
except ImportError:
pass
-from Cython.TestUtils import CythonTest
-
-ip = get_ipython()
code = py3compat.str_to_unicode("""\
def f(x):
return 2*x
@@ -35,6 +48,12 @@
return f(*(x,))
""")
+pgo_cython3_code = cython3_code + py3compat.str_to_unicode("""\
+def main():
+ for _ in range(100): call(5)
+main()
+""")
+
if sys.platform == 'win32':
# not using IPython's decorators here because they depend on "nose"
@@ -55,19 +74,27 @@
return _skip_win32
+@skip_if_not_installed
class TestIPythonMagic(CythonTest):
+ @classmethod
+ def setUpClass(cls):
+ CythonTest.setUpClass()
+ cls._ip = IPython.testing.globalipapp.get_ipython()
+
def setUp(self):
CythonTest.setUp(self)
- ip.extension_manager.load_extension('cython')
+ self._ip.extension_manager.load_extension('cython')
def test_cython_inline(self):
+ ip = self._ip
ip.ex('a=10; b=20')
result = ip.run_cell_magic('cython_inline', '', 'return a+b')
self.assertEqual(result, 30)
@skip_win32('Skip on Windows')
def test_cython_pyximport(self):
+ ip = self._ip
module_name = '_test_cython_pyximport'
ip.run_cell_magic('cython_pyximport', module_name, code)
ip.ex('g = f(10)')
@@ -81,12 +108,14 @@
pass
def test_cython(self):
+ ip = self._ip
ip.run_cell_magic('cython', '', code)
ip.ex('g = f(10)')
self.assertEqual(ip.user_ns['g'], 20.0)
def test_cython_name(self):
# The Cython module named 'mymodule' defines the function f.
+ ip = self._ip
ip.run_cell_magic('cython', '--name=mymodule', code)
# This module can now be imported in the interactive namespace.
ip.ex('import mymodule; g = mymodule.f(10)')
@@ -94,6 +123,7 @@
def test_cython_language_level(self):
# The Cython cell defines the functions f() and call().
+ ip = self._ip
ip.run_cell_magic('cython', '', cython3_code)
ip.ex('g = f(10); h = call(10)')
if sys.version_info[0] < 3:
@@ -105,6 +135,7 @@
def test_cython3(self):
# The Cython cell defines the functions f() and call().
+ ip = self._ip
ip.run_cell_magic('cython', '-3', cython3_code)
ip.ex('g = f(10); h = call(10)')
self.assertEqual(ip.user_ns['g'], 2.0 / 10.0)
@@ -112,13 +143,24 @@
def test_cython2(self):
# The Cython cell defines the functions f() and call().
+ ip = self._ip
ip.run_cell_magic('cython', '-2', cython3_code)
ip.ex('g = f(10); h = call(10)')
self.assertEqual(ip.user_ns['g'], 2 // 10)
self.assertEqual(ip.user_ns['h'], 2 // 10)
@skip_win32('Skip on Windows')
+ def test_cython3_pgo(self):
+ # The Cython cell defines the functions f() and call().
+ ip = self._ip
+ ip.run_cell_magic('cython', '-3 --pgo', pgo_cython3_code)
+ ip.ex('g = f(10); h = call(10); main()')
+ self.assertEqual(ip.user_ns['g'], 2.0 / 10.0)
+ self.assertEqual(ip.user_ns['h'], 2.0 / 10.0)
+
+ @skip_win32('Skip on Windows')
def test_extlibs(self):
+ ip = self._ip
code = py3compat.str_to_unicode("""
from libc.math cimport sin
x = sin(0.0)
@@ -126,3 +168,45 @@
ip.user_ns['x'] = 1
ip.run_cell_magic('cython', '-l m', code)
self.assertEqual(ip.user_ns['x'], 0)
+
+
+ def test_cython_verbose(self):
+ ip = self._ip
+ ip.run_cell_magic('cython', '--verbose', code)
+ ip.ex('g = f(10)')
+ self.assertEqual(ip.user_ns['g'], 20.0)
+
+ def test_cython_verbose_thresholds(self):
+ @contextmanager
+ def mock_distutils():
+ class MockLog:
+ DEBUG = 1
+ INFO = 2
+ thresholds = [INFO]
+
+ def set_threshold(self, val):
+ self.thresholds.append(val)
+ return self.thresholds[-2]
+
+
+ new_log = MockLog()
+ old_log = IpythonMagic.distutils.log
+ try:
+ IpythonMagic.distutils.log = new_log
+ yield new_log
+ finally:
+ IpythonMagic.distutils.log = old_log
+
+ ip = self._ip
+ with mock_distutils() as verbose_log:
+ ip.run_cell_magic('cython', '--verbose', code)
+ ip.ex('g = f(10)')
+ self.assertEqual(ip.user_ns['g'], 20.0)
+ self.assertEquals([verbose_log.INFO, verbose_log.DEBUG, verbose_log.INFO],
+ verbose_log.thresholds)
+
+ with mock_distutils() as normal_log:
+ ip.run_cell_magic('cython', '', code)
+ ip.ex('g = f(10)')
+ self.assertEqual(ip.user_ns['g'], 20.0)
+ self.assertEquals([normal_log.INFO], normal_log.thresholds)
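The threshold sequence asserted above (`INFO, DEBUG, INFO`) comes from `_build_extension` saving the old `distutils.log` threshold, raising it to `DEBUG` for a verbose build, and restoring it afterwards. That save/raise/restore pattern as a generic context manager (a sketch with a fake log object, not Cython's code):

```python
import contextlib


@contextlib.contextmanager
def raised_threshold(log, level):
    # Raise the log threshold for the duration of the block, then restore.
    old = log.set_threshold(level)
    try:
        yield
    finally:
        log.set_threshold(old)


class FakeLog:
    DEBUG, INFO = 1, 2

    def __init__(self):
        self.level = self.INFO
        self.history = [self.INFO]

    def set_threshold(self, val):
        old, self.level = self.level, val
        self.history.append(val)
        return old


log = FakeLog()
with raised_threshold(log, FakeLog.DEBUG):
    pass  # the verbose build would run here
assert log.history == [FakeLog.INFO, FakeLog.DEBUG, FakeLog.INFO]
```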
diff -Nru cython-0.26.1/Cython/Build/Tests/TestStripLiterals.py cython-0.29.14/Cython/Build/Tests/TestStripLiterals.py
--- cython-0.26.1/Cython/Build/Tests/TestStripLiterals.py 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Cython/Build/Tests/TestStripLiterals.py 2018-11-24 09:20:06.000000000 +0000
@@ -6,10 +6,10 @@
def t(self, before, expected):
actual, literals = strip_string_literals(before, prefix="_L")
- self.assertEquals(expected, actual)
+ self.assertEqual(expected, actual)
for key, value in literals.items():
actual = actual.replace(key, value)
- self.assertEquals(before, actual)
+ self.assertEqual(before, actual)
def test_empty(self):
self.t("", "")
diff -Nru cython-0.26.1/Cython/CodeWriter.py cython-0.29.14/Cython/CodeWriter.py
--- cython-0.26.1/Cython/CodeWriter.py 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/CodeWriter.py 2018-09-22 14:18:56.000000000 +0000
@@ -363,7 +363,7 @@
self.dedent()
def visit_IfStatNode(self, node):
- # The IfClauseNode is handled directly without a seperate match
+ # The IfClauseNode is handled directly without a separate match
     # for clarity.
self.startline(u"if ")
self.visit(node.if_clauses[0].condition)
@@ -519,3 +519,298 @@
def visit_StatNode(self, node):
pass
+
+
+class ExpressionWriter(TreeVisitor):
+
+ def __init__(self, result=None):
+ super(ExpressionWriter, self).__init__()
+ if result is None:
+ result = u""
+ self.result = result
+ self.precedence = [0]
+
+ def write(self, tree):
+ self.visit(tree)
+ return self.result
+
+ def put(self, s):
+ self.result += s
+
+ def remove(self, s):
+ if self.result.endswith(s):
+ self.result = self.result[:-len(s)]
+
+ def comma_separated_list(self, items):
+ if len(items) > 0:
+ for item in items[:-1]:
+ self.visit(item)
+ self.put(u", ")
+ self.visit(items[-1])
+
+ def visit_Node(self, node):
+ raise AssertionError("Node not handled by serializer: %r" % node)
+
+ def visit_NameNode(self, node):
+ self.put(node.name)
+
+ def visit_NoneNode(self, node):
+ self.put(u"None")
+
+ def visit_EllipsisNode(self, node):
+ self.put(u"...")
+
+ def visit_BoolNode(self, node):
+ self.put(str(node.value))
+
+ def visit_ConstNode(self, node):
+ self.put(str(node.value))
+
+ def visit_ImagNode(self, node):
+ self.put(node.value)
+ self.put(u"j")
+
+ def emit_string(self, node, prefix=u""):
+ repr_val = repr(node.value)
+ if repr_val[0] in 'ub':
+ repr_val = repr_val[1:]
+ self.put(u"%s%s" % (prefix, repr_val))
+
+ def visit_BytesNode(self, node):
+ self.emit_string(node, u"b")
+
+ def visit_StringNode(self, node):
+ self.emit_string(node)
+
+ def visit_UnicodeNode(self, node):
+ self.emit_string(node, u"u")
+
+ def emit_sequence(self, node, parens=(u"", u"")):
+ open_paren, close_paren = parens
+ items = node.subexpr_nodes()
+ self.put(open_paren)
+ self.comma_separated_list(items)
+ self.put(close_paren)
+
+ def visit_ListNode(self, node):
+ self.emit_sequence(node, u"[]")
+
+ def visit_TupleNode(self, node):
+ self.emit_sequence(node, u"()")
+
+ def visit_SetNode(self, node):
+ if len(node.subexpr_nodes()) > 0:
+ self.emit_sequence(node, u"{}")
+ else:
+ self.put(u"set()")
+
+ def visit_DictNode(self, node):
+ self.emit_sequence(node, u"{}")
+
+ def visit_DictItemNode(self, node):
+ self.visit(node.key)
+ self.put(u": ")
+ self.visit(node.value)
+
+ unop_precedence = {
+ 'not': 3, '!': 3,
+ '+': 11, '-': 11, '~': 11,
+ }
+ binop_precedence = {
+ 'or': 1,
+ 'and': 2,
+ # unary: 'not': 3, '!': 3,
+ 'in': 4, 'not_in': 4, 'is': 4, 'is_not': 4, '<': 4, '<=': 4, '>': 4, '>=': 4, '!=': 4, '==': 4,
+ '|': 5,
+ '^': 6,
+ '&': 7,
+ '<<': 8, '>>': 8,
+ '+': 9, '-': 9,
+ '*': 10, '@': 10, '/': 10, '//': 10, '%': 10,
+ # unary: '+': 11, '-': 11, '~': 11
+ '**': 12,
+ }
+
+ def operator_enter(self, new_prec):
+ old_prec = self.precedence[-1]
+ if old_prec > new_prec:
+ self.put(u"(")
+ self.precedence.append(new_prec)
+
+ def operator_exit(self):
+ old_prec, new_prec = self.precedence[-2:]
+ if old_prec > new_prec:
+ self.put(u")")
+ self.precedence.pop()
+
+ def visit_NotNode(self, node):
+ op = 'not'
+ prec = self.unop_precedence[op]
+ self.operator_enter(prec)
+ self.put(u"not ")
+ self.visit(node.operand)
+ self.operator_exit()
+
+ def visit_UnopNode(self, node):
+ op = node.operator
+ prec = self.unop_precedence[op]
+ self.operator_enter(prec)
+ self.put(u"%s" % node.operator)
+ self.visit(node.operand)
+ self.operator_exit()
+
+ def visit_BinopNode(self, node):
+ op = node.operator
+ prec = self.binop_precedence.get(op, 0)
+ self.operator_enter(prec)
+ self.visit(node.operand1)
+ self.put(u" %s " % op.replace('_', ' '))
+ self.visit(node.operand2)
+ self.operator_exit()
+
+ def visit_BoolBinopNode(self, node):
+ self.visit_BinopNode(node)
+
+ def visit_PrimaryCmpNode(self, node):
+ self.visit_BinopNode(node)
+
+ def visit_IndexNode(self, node):
+ self.visit(node.base)
+ self.put(u"[")
+ if isinstance(node.index, TupleNode):
+ self.emit_sequence(node.index)
+ else:
+ self.visit(node.index)
+ self.put(u"]")
+
+ def visit_SliceIndexNode(self, node):
+ self.visit(node.base)
+ self.put(u"[")
+ if node.start:
+ self.visit(node.start)
+ self.put(u":")
+ if node.stop:
+ self.visit(node.stop)
+ if node.slice:
+ self.put(u":")
+ self.visit(node.slice)
+ self.put(u"]")
+
+ def visit_SliceNode(self, node):
+ if not node.start.is_none:
+ self.visit(node.start)
+ self.put(u":")
+ if not node.stop.is_none:
+ self.visit(node.stop)
+ if not node.step.is_none:
+ self.put(u":")
+ self.visit(node.step)
+
+ def visit_CondExprNode(self, node):
+ self.visit(node.true_val)
+ self.put(u" if ")
+ self.visit(node.test)
+ self.put(u" else ")
+ self.visit(node.false_val)
+
+ def visit_AttributeNode(self, node):
+ self.visit(node.obj)
+ self.put(u".%s" % node.attribute)
+
+ def visit_SimpleCallNode(self, node):
+ self.visit(node.function)
+ self.put(u"(")
+ self.comma_separated_list(node.args)
+ self.put(")")
+
+ def emit_pos_args(self, node):
+ if node is None:
+ return
+ if isinstance(node, AddNode):
+ self.emit_pos_args(node.operand1)
+ self.emit_pos_args(node.operand2)
+ elif isinstance(node, TupleNode):
+ for expr in node.subexpr_nodes():
+ self.visit(expr)
+ self.put(u", ")
+ elif isinstance(node, AsTupleNode):
+ self.put("*")
+ self.visit(node.arg)
+ self.put(u", ")
+ else:
+ self.visit(node)
+ self.put(u", ")
+
+ def emit_kwd_args(self, node):
+ if node is None:
+ return
+ if isinstance(node, MergedDictNode):
+ for expr in node.subexpr_nodes():
+ self.emit_kwd_args(expr)
+ elif isinstance(node, DictNode):
+ for expr in node.subexpr_nodes():
+ self.put(u"%s=" % expr.key.value)
+ self.visit(expr.value)
+ self.put(u", ")
+ else:
+ self.put(u"**")
+ self.visit(node)
+ self.put(u", ")
+
+ def visit_GeneralCallNode(self, node):
+ self.visit(node.function)
+ self.put(u"(")
+ self.emit_pos_args(node.positional_args)
+ self.emit_kwd_args(node.keyword_args)
+ self.remove(u", ")
+ self.put(")")
+
+ def emit_comprehension(self, body, target,
+ sequence, condition,
+ parens=(u"", u"")):
+ open_paren, close_paren = parens
+ self.put(open_paren)
+ self.visit(body)
+ self.put(u" for ")
+ self.visit(target)
+ self.put(u" in ")
+ self.visit(sequence)
+ if condition:
+ self.put(u" if ")
+ self.visit(condition)
+ self.put(close_paren)
+
+ def visit_ComprehensionAppendNode(self, node):
+ self.visit(node.expr)
+
+ def visit_DictComprehensionAppendNode(self, node):
+ self.visit(node.key_expr)
+ self.put(u": ")
+ self.visit(node.value_expr)
+
+ def visit_ComprehensionNode(self, node):
+ tpmap = {'list': u"[]", 'dict': u"{}", 'set': u"{}"}
+ parens = tpmap[node.type.py_type_name()]
+ body = node.loop.body
+ target = node.loop.target
+ sequence = node.loop.iterator.sequence
+ condition = None
+ if hasattr(body, 'if_clauses'):
+ # type(body) is Nodes.IfStatNode
+ condition = body.if_clauses[0].condition
+ body = body.if_clauses[0].body
+ self.emit_comprehension(body, target, sequence, condition, parens)
+
+ def visit_GeneratorExpressionNode(self, node):
+ body = node.loop.body
+ target = node.loop.target
+ sequence = node.loop.iterator.sequence
+ condition = None
+ if hasattr(body, 'if_clauses'):
+ # type(body) is Nodes.IfStatNode
+ condition = body.if_clauses[0].condition
+ body = body.if_clauses[0].body.expr.arg
+ elif hasattr(body, 'expr'):
+ # type(body) is Nodes.ExprStatNode
+ body = body.expr.arg
+ self.emit_comprehension(body, target, sequence, condition, u"()")
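The `unop_precedence`/`binop_precedence` tables and the `operator_enter`/`operator_exit` pair above implement minimal-parenthesis printing: the writer keeps a stack of enclosing precedences and emits parentheses only when an inner operator binds more loosely than its context. A self-contained sketch of the same idea (the `fmt` helper and the tuple encoding are illustrative, not Cython's API); like the code above, it ignores associativity:

```python
# Minimal-parenthesis printing with a precedence check, mirroring the
# ExpressionWriter approach above.  An expression is a leaf string or a
# tuple (operator, left, right); the encoding is illustrative only.
BINOP_PRECEDENCE = {'or': 1, 'and': 2, '+': 9, '-': 9, '*': 10, '/': 10, '**': 12}

def fmt(expr, enclosing_prec=0):
    if isinstance(expr, str):
        return expr
    op, left, right = expr
    prec = BINOP_PRECEDENCE.get(op, 0)
    result = "%s %s %s" % (fmt(left, prec), op, fmt(right, prec))
    # Parenthesise only when the enclosing operator binds tighter,
    # just like operator_enter()'s "old_prec > new_prec" test.
    if enclosing_prec > prec:
        result = "(%s)" % result
    return result

print(fmt(('*', ('+', 'a', 'b'), 'c')))  # (a + b) * c
print(fmt(('+', ('*', 'a', 'b'), 'c')))  # a * b + c
```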
diff -Nru cython-0.26.1/Cython/Compiler/Annotate.py cython-0.29.14/Cython/Compiler/Annotate.py
--- cython-0.26.1/Cython/Compiler/Annotate.py 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Annotate.py 2018-09-22 14:18:56.000000000 +0000
@@ -79,14 +79,6 @@
css.append(HtmlFormatter().get_style_defs('.cython'))
return '\n'.join(css)
- _js = """
- function toggleDiv(id) {
- theDiv = id.nextElementSibling
- if (theDiv.style.display != 'block') theDiv.style.display = 'block';
- else theDiv.style.display = 'none';
- }
- """.strip()
-
_css_template = textwrap.dedent("""
body.cython { font-family: courier; font-size: 12; }
@@ -114,6 +106,14 @@
.cython.code .c_call { color: #0000FF; }
""")
+ # on-click toggle function to show/hide C source code
+ _onclick_attr = ' onclick="{0}"'.format((
+ "(function(s){"
+ " s.display = s.display === 'block' ? 'none' : 'block'"
+ "})(this.nextElementSibling.style)"
+ ).replace(' ', '') # poor dev's JS minification
+ )
+
def save_annotation(self, source_filename, target_filename, coverage_xml=None):
with Utils.open_source_file(source_filename) as f:
code = f.read()
@@ -141,9 +141,6 @@
-    <script>
-    {js}
-    </script>
Generated by Cython {watermark} {more_info}
@@ -151,7 +148,7 @@
Yellow lines hint at Python interaction.
 Click on a line that starts with a "<code>+</code>" to see the C code that Cython generated for it.
- ''').format(css=self._css(), js=self._js, watermark=Version.watermark,
+ ''').format(css=self._css(), watermark=Version.watermark,
filename=os.path.basename(source_filename) if source_filename else '',
more_info=coverage_info)
]
@@ -253,7 +250,7 @@
calls['py_macro_api'] + calls['pyx_macro_api'])
if c_code:
- onclick = " onclick='toggleDiv(this)'"
+ onclick = self._onclick_attr
expandsymbol = '+'
else:
onclick = ''
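The new `_onclick_attr` replaces the shared `toggleDiv` script with a self-invoking JavaScript expression inlined into every expandable line, squeezed through `.replace(' ', '')` as a crude minifier. Rebuilding the same attribute in isolation shows what ends up in the generated HTML:

```python
# Rebuild the inlined onclick attribute: a self-invoking function that
# flips the display style of the element following the clicked line.
onclick_attr = ' onclick="{0}"'.format((
    "(function(s){"
    " s.display = s.display === 'block' ? 'none' : 'block'"
    "})(this.nextElementSibling.style)"
).replace(' ', ''))  # the diff's "poor dev's JS minification"

print(onclick_attr)
```

The spaces exist only for readability of the Python source; stripping them keeps the JavaScript valid while shortening every annotated line.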
diff -Nru cython-0.26.1/Cython/Compiler/AutoDocTransforms.py cython-0.29.14/Cython/Compiler/AutoDocTransforms.py
--- cython-0.26.1/Cython/Compiler/AutoDocTransforms.py 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/AutoDocTransforms.py 2018-09-22 14:18:56.000000000 +0000
@@ -1,89 +1,59 @@
-from __future__ import absolute_import
+from __future__ import absolute_import, print_function
from .Visitor import CythonTransform
from .StringEncoding import EncodedString
from . import Options
from . import PyrexTypes, ExprNodes
+from ..CodeWriter import ExpressionWriter
+
+
+class AnnotationWriter(ExpressionWriter):
+
+ def visit_Node(self, node):
+        self.put(u"<???>")
+
+ def visit_LambdaNode(self, node):
+ # XXX Should we do better?
+        self.put("<lambda>")
+
class EmbedSignature(CythonTransform):
def __init__(self, context):
super(EmbedSignature, self).__init__(context)
- self.denv = None # XXX
self.class_name = None
self.class_node = None
- unop_precedence = 11
- binop_precedence = {
- 'or': 1,
- 'and': 2,
- 'not': 3,
- 'in': 4, 'not in': 4, 'is': 4, 'is not': 4, '<': 4, '<=': 4, '>': 4, '>=': 4, '!=': 4, '==': 4,
- '|': 5,
- '^': 6,
- '&': 7,
- '<<': 8, '>>': 8,
- '+': 9, '-': 9,
- '*': 10, '/': 10, '//': 10, '%': 10,
- # unary: '+': 11, '-': 11, '~': 11
- '**': 12}
-
- def _fmt_expr_node(self, node, precedence=0):
- if isinstance(node, ExprNodes.BinopNode) and not node.inplace:
- new_prec = self.binop_precedence.get(node.operator, 0)
- result = '%s %s %s' % (self._fmt_expr_node(node.operand1, new_prec),
- node.operator,
- self._fmt_expr_node(node.operand2, new_prec))
- if precedence > new_prec:
- result = '(%s)' % result
- elif isinstance(node, ExprNodes.UnopNode):
- result = '%s%s' % (node.operator,
- self._fmt_expr_node(node.operand, self.unop_precedence))
- if precedence > self.unop_precedence:
- result = '(%s)' % result
- elif isinstance(node, ExprNodes.AttributeNode):
- result = '%s.%s' % (self._fmt_expr_node(node.obj), node.attribute)
- else:
- result = node.name
+ def _fmt_expr(self, node):
+ writer = AnnotationWriter()
+ result = writer.write(node)
+ # print(type(node).__name__, '-->', result)
return result
- def _fmt_arg_defv(self, arg):
- default_val = arg.default
- if not default_val:
- return None
- if isinstance(default_val, ExprNodes.NullNode):
- return 'NULL'
- try:
- denv = self.denv # XXX
- ctval = default_val.compile_time_value(self.denv)
- repr_val = repr(ctval)
- if isinstance(default_val, ExprNodes.UnicodeNode):
- if repr_val[:1] != 'u':
- return u'u%s' % repr_val
- elif isinstance(default_val, ExprNodes.BytesNode):
- if repr_val[:1] != 'b':
- return u'b%s' % repr_val
- elif isinstance(default_val, ExprNodes.StringNode):
- if repr_val[:1] in 'ub':
- return repr_val[1:]
- return repr_val
- except Exception:
- try:
- return self._fmt_expr_node(default_val)
- except AttributeError:
-            return '<???>'
-
def _fmt_arg(self, arg):
if arg.type is PyrexTypes.py_object_type or arg.is_self_arg:
doc = arg.name
else:
doc = arg.type.declaration_code(arg.name, for_display=1)
- if arg.default:
- arg_defv = self._fmt_arg_defv(arg)
- if arg_defv:
- doc = doc + ('=%s' % arg_defv)
+
+ if arg.annotation:
+ annotation = self._fmt_expr(arg.annotation)
+ doc = doc + (': %s' % annotation)
+ if arg.default:
+ default = self._fmt_expr(arg.default)
+ doc = doc + (' = %s' % default)
+ elif arg.default:
+ default = self._fmt_expr(arg.default)
+ doc = doc + ('=%s' % default)
return doc
+ def _fmt_star_arg(self, arg):
+ arg_doc = arg.name
+ if arg.annotation:
+ annotation = self._fmt_expr(arg.annotation)
+ arg_doc = arg_doc + (': %s' % annotation)
+ return arg_doc
+
def _fmt_arglist(self, args,
npargs=0, pargs=None,
nkargs=0, kargs=None,
@@ -94,11 +64,13 @@
arg_doc = self._fmt_arg(arg)
arglist.append(arg_doc)
if pargs:
- arglist.insert(npargs, '*%s' % pargs.name)
+ arg_doc = self._fmt_star_arg(pargs)
+ arglist.insert(npargs, '*%s' % arg_doc)
elif nkargs:
arglist.insert(npargs, '*')
if kargs:
- arglist.append('**%s' % kargs.name)
+ arg_doc = self._fmt_star_arg(kargs)
+ arglist.append('**%s' % arg_doc)
return arglist
def _fmt_ret_type(self, ret):
@@ -110,6 +82,7 @@
def _fmt_signature(self, cls_name, func_name, args,
npargs=0, pargs=None,
nkargs=0, kargs=None,
+ return_expr=None,
return_type=None, hide_self=False):
arglist = self._fmt_arglist(args,
npargs, pargs,
@@ -119,10 +92,13 @@
func_doc = '%s(%s)' % (func_name, arglist_doc)
if cls_name:
func_doc = '%s.%s' % (cls_name, func_doc)
- if return_type:
+ ret_doc = None
+ if return_expr:
+ ret_doc = self._fmt_expr(return_expr)
+ elif return_type:
ret_doc = self._fmt_ret_type(return_type)
- if ret_doc:
- func_doc = '%s -> %s' % (func_doc, ret_doc)
+ if ret_doc:
+ func_doc = '%s -> %s' % (func_doc, ret_doc)
return func_doc
def _embed_signature(self, signature, node_doc):
@@ -177,6 +153,7 @@
class_name, func_name, node.args,
npargs, node.star_arg,
nkargs, node.starstar_arg,
+ return_expr=node.return_type_annotation,
return_type=None, hide_self=hide_self)
if signature:
if is_constructor:
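The reworked `_fmt_arg` above follows the PEP 8 convention: spaces around `=` only when the argument also carries an annotation. A standalone sketch of that branching (a plain function instead of Cython's node classes):

```python
def format_arg(name, annotation=None, default=None):
    # Mirrors _fmt_arg's branching: annotated defaults are spelled
    # "name: ann = dflt" (PEP 8), bare defaults stay compact as "name=dflt".
    doc = name
    if annotation is not None:
        doc += ': %s' % annotation
        if default is not None:
            doc += ' = %s' % default
    elif default is not None:
        doc += '=%s' % default
    return doc

print(format_arg('x', annotation='int', default='1'))  # x: int = 1
print(format_arg('x', default='1'))                    # x=1
```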
diff -Nru cython-0.26.1/Cython/Compiler/Buffer.py cython-0.29.14/Cython/Compiler/Buffer.py
--- cython-0.26.1/Cython/Compiler/Buffer.py 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Buffer.py 2018-09-22 14:18:56.000000000 +0000
@@ -36,7 +36,6 @@
if self.buffers_exists:
use_bufstruct_declare_code(node.scope)
use_py2_buffer_functions(node.scope)
- node.scope.use_utility_code(empty_bufstruct_utility)
return result
@@ -317,8 +316,8 @@
code.putln("%s.data = NULL;" % pybuffernd_struct)
code.putln("%s.rcbuffer = &%s;" % (pybuffernd_struct, pybuffer_struct))
+
def put_acquire_arg_buffer(entry, code, pos):
- code.globalstate.use_utility_code(acquire_utility_code)
buffer_aux = entry.buffer_aux
getbuffer = get_getbuffer_call(code, entry.cname, buffer_aux, entry.type)
@@ -327,14 +326,16 @@
code.putln("__Pyx_BufFmt_StackElem __pyx_stack[%d];" % entry.type.dtype.struct_nesting_depth())
code.putln(code.error_goto_if("%s == -1" % getbuffer, pos))
code.putln("}")
- # An exception raised in arg parsing cannot be catched, so no
+ # An exception raised in arg parsing cannot be caught, so no
# need to care about the buffer then.
put_unpack_buffer_aux_into_scope(entry, code)
+
def put_release_buffer_code(code, entry):
code.globalstate.use_utility_code(acquire_utility_code)
code.putln("__Pyx_SafeReleaseBuffer(&%s.rcbuffer->pybuffer);" % entry.buffer_aux.buflocal_nd_var.cname)
+
def get_getbuffer_call(code, obj_cname, buffer_aux, buffer_type):
ndim = buffer_type.ndim
cast = int(buffer_type.cast)
@@ -343,10 +344,12 @@
dtype_typeinfo = get_type_information_cname(code, buffer_type.dtype)
+ code.globalstate.use_utility_code(acquire_utility_code)
return ("__Pyx_GetBufferAndValidate(&%(pybuffernd_struct)s.rcbuffer->pybuffer, "
"(PyObject*)%(obj_cname)s, &%(dtype_typeinfo)s, %(flags)s, %(ndim)d, "
"%(cast)d, __pyx_stack)" % locals())
+
def put_assign_to_buffer(lhs_cname, rhs_cname, buf_entry,
is_initialized, pos, code):
"""
@@ -364,11 +367,10 @@
"""
buffer_aux, buffer_type = buf_entry.buffer_aux, buf_entry.type
- code.globalstate.use_utility_code(acquire_utility_code)
pybuffernd_struct = buffer_aux.buflocal_nd_var.cname
flags = get_flags(buffer_aux, buffer_type)
- code.putln("{") # Set up necesarry stack for getbuffer
+ code.putln("{") # Set up necessary stack for getbuffer
code.putln("__Pyx_BufFmt_StackElem __pyx_stack[%d];" % buffer_type.dtype.struct_nesting_depth())
getbuffer = get_getbuffer_call(code, "%s", buffer_aux, buffer_type) # fill in object below
@@ -384,18 +386,19 @@
# before raising the exception. A failure of reacquisition
# will cause the reacquisition exception to be reported, one
# can consider working around this later.
- type, value, tb = [code.funcstate.allocate_temp(PyrexTypes.py_object_type, manage_ref=False)
- for i in range(3)]
- code.putln('PyErr_Fetch(&%s, &%s, &%s);' % (type, value, tb))
+ exc_temps = tuple(code.funcstate.allocate_temp(PyrexTypes.py_object_type, manage_ref=False)
+ for _ in range(3))
+ code.putln('PyErr_Fetch(&%s, &%s, &%s);' % exc_temps)
code.putln('if (%s) {' % code.unlikely("%s == -1" % (getbuffer % lhs_cname)))
- code.putln('Py_XDECREF(%s); Py_XDECREF(%s); Py_XDECREF(%s);' % (type, value, tb)) # Do not refnanny these!
+ code.putln('Py_XDECREF(%s); Py_XDECREF(%s); Py_XDECREF(%s);' % exc_temps) # Do not refnanny these!
code.globalstate.use_utility_code(raise_buffer_fallback_code)
code.putln('__Pyx_RaiseBufferFallbackError();')
code.putln('} else {')
- code.putln('PyErr_Restore(%s, %s, %s);' % (type, value, tb))
- for t in (type, value, tb):
- code.funcstate.release_temp(t)
+ code.putln('PyErr_Restore(%s, %s, %s);' % exc_temps)
code.putln('}')
+ code.putln('%s = %s = %s = 0;' % exc_temps)
+ for t in exc_temps:
+ code.funcstate.release_temp(t)
code.putln('}')
# Unpack indices
put_unpack_buffer_aux_into_scope(buf_entry, code)
@@ -489,15 +492,6 @@
env.use_utility_code(buffer_struct_declare_code)
-def get_empty_bufstruct_code(max_ndim):
- code = dedent("""
- static Py_ssize_t __Pyx_zeros[] = {%s};
- static Py_ssize_t __Pyx_minusones[] = {%s};
- """) % (", ".join(["0"] * max_ndim), ", ".join(["-1"] * max_ndim))
- return UtilityCode(proto=code)
-
-empty_bufstruct_utility = get_empty_bufstruct_code(Options.buffer_max_dims)
-
def buf_lookup_full_code(proto, defin, name, nd):
"""
Generates a buffer lookup function for the right number
@@ -518,6 +512,7 @@
""") % (i, i, i, i) for i in range(nd)]
) + "\nreturn ptr;\n}")
+
def buf_lookup_strided_code(proto, defin, name, nd):
"""
Generates a buffer lookup function for the right number
@@ -528,6 +523,7 @@
offset = " + ".join(["i%d * s%d" % (i, i) for i in range(nd)])
proto.putln("#define %s(type, buf, %s) (type)((char*)buf + %s)" % (name, args, offset))
+
def buf_lookup_c_code(proto, defin, name, nd):
"""
Similar to strided lookup, but can assume that the last dimension
@@ -541,6 +537,7 @@
offset = " + ".join(["i%d * s%d" % (i, i) for i in range(nd - 1)])
proto.putln("#define %s(type, buf, %s) ((type)((char*)buf + %s) + i%d)" % (name, args, offset, nd - 1))
+
def buf_lookup_fortran_code(proto, defin, name, nd):
"""
Like C lookup, but the first index is optimized instead.
@@ -556,6 +553,7 @@
def use_py2_buffer_functions(env):
env.use_utility_code(GetAndReleaseBufferUtilityCode())
+
class GetAndReleaseBufferUtilityCode(object):
# Emulation of PyObject_GetBuffer and PyBuffer_Release for Python 2.
# For >= 2.6 we do double mode -- use the new buffer interface on objects
@@ -619,7 +617,7 @@
def mangle_dtype_name(dtype):
- # Use prefixes to seperate user defined types from builtins
+ # Use prefixes to separate user defined types from builtins
# (consider "typedef float unsigned_int")
if dtype.is_pyobject:
return "object"
@@ -638,7 +636,7 @@
and return the name of the type info struct.
Structs with two floats of the same size are encoded as complex numbers.
- One can seperate between complex numbers declared as struct or with native
+ One can separate between complex numbers declared as struct or with native
encoding by inspecting to see if the fields field of the type is
filled in.
"""
@@ -723,26 +721,18 @@
else:
return TempitaUtilityCode.load(util_code_name, "Buffer.c", context=context, **kwargs)
-context = dict(max_dims=str(Options.buffer_max_dims))
-buffer_struct_declare_code = load_buffer_utility("BufferStructDeclare",
- context=context)
-
+context = dict(max_dims=Options.buffer_max_dims)
+buffer_struct_declare_code = load_buffer_utility("BufferStructDeclare", context=context)
+buffer_formats_declare_code = load_buffer_utility("BufferFormatStructs")
# Utility function to set the right exception
# The caller should immediately goto_error
raise_indexerror_code = load_buffer_utility("BufferIndexError")
raise_indexerror_nogil = load_buffer_utility("BufferIndexErrorNogil")
-
raise_buffer_fallback_code = load_buffer_utility("BufferFallbackError")
-buffer_structs_code = load_buffer_utility(
- "BufferFormatStructs", proto_block='utility_code_proto_before_types')
-acquire_utility_code = load_buffer_utility("BufferFormatCheck",
- context=context,
- requires=[buffer_structs_code,
- UtilityCode.load_cached("IsLittleEndian", "ModuleSetupCode.c")])
+
+acquire_utility_code = load_buffer_utility("BufferGetAndValidate", context=context)
+buffer_format_check_code = load_buffer_utility("BufferFormatCheck", context=context)
# See utility code BufferFormatFromTypeInfo
-_typeinfo_to_format_code = load_buffer_utility("TypeInfoToFormat", context={},
- requires=[buffer_structs_code])
-typeinfo_compare_code = load_buffer_utility("TypeInfoCompare", context={},
- requires=[buffer_structs_code])
+_typeinfo_to_format_code = load_buffer_utility("TypeInfoToFormat")
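The `buf_lookup_*_code` helpers touched above generate C preprocessor macros whose pointer arithmetic is a sum of `index * stride` terms, one per buffer dimension. The string assembly for the strided case can be sketched on its own (the macro and argument names here are illustrative):

```python
def strided_lookup_macro(name, nd):
    # Assemble "#define name(type, buf, i0, s0, ...) (type)((char*)buf + ...)"
    # the way buf_lookup_strided_code joins per-dimension "index * stride" terms.
    args = ", ".join("i%d, s%d" % (i, i) for i in range(nd))
    offset = " + ".join("i%d * s%d" % (i, i) for i in range(nd))
    return "#define %s(type, buf, %s) (type)((char*)buf + %s)" % (name, args, offset)

print(strided_lookup_macro("__Pyx_BufPtrStrided2d", 2))
# #define __Pyx_BufPtrStrided2d(type, buf, i0, s0, i1, s1) (type)((char*)buf + i0 * s0 + i1 * s1)
```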
diff -Nru cython-0.26.1/Cython/Compiler/Builtin.py cython-0.29.14/Cython/Compiler/Builtin.py
--- cython-0.26.1/Cython/Compiler/Builtin.py 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Builtin.py 2018-09-22 14:18:56.000000000 +0000
@@ -95,16 +95,24 @@
is_strict_signature = True),
BuiltinFunction('abs', "f", "f", "fabsf",
is_strict_signature = True),
+ BuiltinFunction('abs', "i", "i", "abs",
+ is_strict_signature = True),
+ BuiltinFunction('abs', "l", "l", "labs",
+ is_strict_signature = True),
+ BuiltinFunction('abs', None, None, "__Pyx_abs_longlong",
+ utility_code = UtilityCode.load("abs_longlong", "Builtins.c"),
+ func_type = PyrexTypes.CFuncType(
+ PyrexTypes.c_longlong_type, [
+ PyrexTypes.CFuncTypeArg("arg", PyrexTypes.c_longlong_type, None)
+ ],
+ is_strict_signature = True, nogil=True)),
] + list(
- # uses getattr to get PyrexTypes.c_uint_type etc to allow easy iteration over a list
- BuiltinFunction('abs', None, None, "__Pyx_abs_{0}".format(t),
- utility_code = UtilityCode.load("abs_{0}".format(t), "Builtins.c"),
+ BuiltinFunction('abs', None, None, "/*abs_{0}*/".format(t.specialization_name()),
func_type = PyrexTypes.CFuncType(
- getattr(PyrexTypes,"c_u{0}_type".format(t)), [
- PyrexTypes.CFuncTypeArg("arg", getattr(PyrexTypes,"c_{0}_type".format(t)), None)
- ],
+ t,
+ [PyrexTypes.CFuncTypeArg("arg", t, None)],
is_strict_signature = True, nogil=True))
- for t in ("int", "long", "longlong")
+ for t in (PyrexTypes.c_uint_type, PyrexTypes.c_ulong_type, PyrexTypes.c_ulonglong_type)
) + list(
BuiltinFunction('abs', None, None, "__Pyx_c_abs{0}".format(t.funcsuffix),
func_type = PyrexTypes.CFuncType(
@@ -116,7 +124,8 @@
PyrexTypes.c_double_complex_type,
PyrexTypes.c_longdouble_complex_type)
) + [
- BuiltinFunction('abs', "O", "O", "PyNumber_Absolute"),
+ BuiltinFunction('abs', "O", "O", "__Pyx_PyNumber_Absolute",
+ utility_code=UtilityCode.load("py_abs", "Builtins.c")),
#('all', "", "", ""),
#('any', "", "", ""),
#('ascii', "", "", ""),
@@ -320,7 +329,10 @@
("set", "PySet_Type", [BuiltinMethod("__contains__", "TO", "b", "PySequence_Contains"),
BuiltinMethod("clear", "T", "r", "PySet_Clear"),
# discard() and remove() have a special treatment for unhashable values
-# BuiltinMethod("discard", "TO", "r", "PySet_Discard"),
+ BuiltinMethod("discard", "TO", "r", "__Pyx_PySet_Discard",
+ utility_code=UtilityCode.load("py_set_discard", "Optimize.c")),
+ BuiltinMethod("remove", "TO", "r", "__Pyx_PySet_Remove",
+ utility_code=UtilityCode.load("py_set_remove", "Optimize.c")),
# update is actually variadic (see Github issue #1645)
# BuiltinMethod("update", "TO", "r", "__Pyx_PySet_Update",
# utility_code=UtilityCode.load_cached("PySet_Update", "Builtins.c")),
@@ -380,6 +392,8 @@
utility = builtin_utility_code.get(name)
if name == 'frozenset':
objstruct_cname = 'PySetObject'
+ elif name == 'bytearray':
+ objstruct_cname = 'PyByteArrayObject'
elif name == 'bool':
objstruct_cname = None
elif name == 'Exception':
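The `abs` changes above register one C-level specialisation per argument type: `fabsf` for `float`, C's `abs`/`labs` and a `__Pyx_abs_longlong` helper for signed integers, a no-op for unsigned types (the `/*abs_...*/` comment), and a Python-object fallback. A toy dispatch table illustrating that selection (the real choice is made by Cython's signature matching, and the type spellings here are illustrative):

```python
# Toy dispatch table for the abs() specialisations registered above.
ABS_IMPLS = {
    'float': 'fabsf',
    'int': 'abs',
    'long': 'labs',
    'long long': '__Pyx_abs_longlong',
}

def abs_impl(ctype):
    if ctype.startswith('unsigned'):
        # abs() of an unsigned value is a no-op, hence only a comment is emitted.
        return '/*abs_%s*/' % ctype.replace(' ', '_')
    # Anything unknown falls back to the generic Python-object helper.
    return ABS_IMPLS.get(ctype, '__Pyx_PyNumber_Absolute')

print(abs_impl('long'))          # labs
print(abs_impl('unsigned int'))  # /*abs_unsigned_int*/
```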
diff -Nru cython-0.26.1/Cython/Compiler/CmdLine.py cython-0.29.14/Cython/Compiler/CmdLine.py
--- cython-0.26.1/Cython/Compiler/CmdLine.py 2015-09-10 16:25:36.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/CmdLine.py 2018-11-24 09:20:06.000000000 +0000
@@ -40,6 +40,8 @@
 --embed[=<method_name>] Generate a main() function that embeds the Python interpreter.
-2 Compile based on Python-2 syntax and code semantics.
-3 Compile based on Python-3 syntax and code semantics.
+ --3str Compile based on Python-3 syntax and code semantics without
+ assuming unicode by default for string literals under Python 2.
--lenient Change some compile time errors to runtime errors to
improve Python compatibility
--capi-reexport-cincludes Add cincluded headers to any auto-generated header files.
@@ -47,10 +49,11 @@
--warning-errors, -Werror Make all warnings into errors
--warning-extra, -Wextra Enable extra warnings
 -X, --directive <name>=<value>[,<name>=<value>,...]  Overrides a compiler directive
     'raw_input'     : ('PY_MAJOR_VERSION >= 3', 'input'),
 }
+ctypedef_builtins_map = {
+ # types of builtins in "ctypedef class" statements which we don't
+ # import either because the names conflict with C types or because
+ # the type simply is not exposed.
+ 'py_int' : '&PyInt_Type',
+ 'py_long' : '&PyLong_Type',
+ 'py_float' : '&PyFloat_Type',
+ 'wrapper_descriptor' : '&PyWrapperDescr_Type',
+}
+
basicsize_builtins_map = {
# builtins whose type has a different tp_basicsize than sizeof(...)
'PyTypeObject': 'PyHeapTypeObject',
}
uncachable_builtins = [
- # builtin names that cannot be cached because they may or may not
- # be available at import time
+ # Global/builtin names that cannot be cached because they may or may not
+ # be available at import time, for various reasons:
+ ## - Py3.7+
+ 'breakpoint', # might deserve an implementation in Cython
+ ## - Py3.4+
+ '__loader__',
+ '__spec__',
+ ## - Py3+
+ 'BlockingIOError',
+ 'BrokenPipeError',
+ 'ChildProcessError',
+ 'ConnectionAbortedError',
+ 'ConnectionError',
+ 'ConnectionRefusedError',
+ 'ConnectionResetError',
+ 'FileExistsError',
+ 'FileNotFoundError',
+ 'InterruptedError',
+ 'IsADirectoryError',
+ 'ModuleNotFoundError',
+ 'NotADirectoryError',
+ 'PermissionError',
+ 'ProcessLookupError',
+ 'RecursionError',
+ 'ResourceWarning',
+ #'StopAsyncIteration', # backported
+ 'TimeoutError',
+ '__build_class__',
+ 'ascii', # might deserve an implementation in Cython
+ #'exec', # implemented in Cython
+ ## - Py2.7+
+ 'memoryview',
+ ## - platform specific
'WindowsError',
- '_', # e.g. gettext
+ ## - others
+ '_', # e.g. used by gettext
]
special_py_methods = set([
@@ -76,7 +120,81 @@
'inline': 'CYTHON_INLINE'
}.get
-is_self_assignment = re.compile(r" *(\w+) = (\1);\s*$").match
+
+class IncludeCode(object):
+ """
+ An include file and/or verbatim C code to be included in the
+ generated sources.
+ """
+ # attributes:
+ #
+ # pieces {order: unicode}: pieces of C code to be generated.
+ # For the included file, the key "order" is zero.
+ # For verbatim include code, the "order" is the "order"
+ # attribute of the original IncludeCode where this piece
+ # of C code was first added. This is needed to prevent
+ # duplication if the same include code is found through
+ # multiple cimports.
+ # location int: where to put this include in the C sources, one
+ # of the constants INITIAL, EARLY, LATE
+ # order int: sorting order (automatically set by increasing counter)
+
+ # Constants for location. If the same include occurs with different
+    # locations, the earliest one takes precedence.
+ INITIAL = 0
+ EARLY = 1
+ LATE = 2
+
+ counter = 1 # Counter for "order"
+
+ def __init__(self, include=None, verbatim=None, late=True, initial=False):
+ self.order = self.counter
+ type(self).counter += 1
+ self.pieces = {}
+
+ if include:
+ if include[0] == '<' and include[-1] == '>':
+ self.pieces[0] = u'#include {0}'.format(include)
+ late = False # system include is never late
+ else:
+ self.pieces[0] = u'#include "{0}"'.format(include)
+
+ if verbatim:
+ self.pieces[self.order] = verbatim
+
+ if initial:
+ self.location = self.INITIAL
+ elif late:
+ self.location = self.LATE
+ else:
+ self.location = self.EARLY
+
+ def dict_update(self, d, key):
+ """
+ Insert `self` in dict `d` with key `key`. If that key already
+ exists, update the attributes of the existing value with `self`.
+ """
+ if key in d:
+ other = d[key]
+ other.location = min(self.location, other.location)
+ other.pieces.update(self.pieces)
+ else:
+ d[key] = self
+
+ def sortkey(self):
+ return self.order
+
+ def mainpiece(self):
+ """
+ Return the main piece of C code, corresponding to the include
+ file. If there was no include file, return None.
+ """
+ return self.pieces.get(0)
+
+ def write(self, code):
+ # Write values of self.pieces dict, sorted by the keys
+ for k in sorted(self.pieces):
+ code.putln(self.pieces[k])
def get_utility_dir():
@@ -116,7 +234,6 @@
"""
is_cython_utility = False
- requires = None
_utility_cache = {}
@classmethod
@@ -138,19 +255,16 @@
if type == 'proto':
utility[0] = code
- elif type.startswith('proto.'):
- utility[0] = code
- utility[1] = type[6:]
elif type == 'impl':
- utility[2] = code
+ utility[1] = code
else:
- all_tags = utility[3]
+ all_tags = utility[2]
if KEYWORDS_MUST_BE_BYTES:
type = type.encode('ASCII')
all_tags[type] = code
if tags:
- all_tags = utility[3]
+ all_tags = utility[2]
for name, values in tags.items():
if KEYWORDS_MUST_BE_BYTES:
name = name.encode('ASCII')
@@ -176,12 +290,12 @@
 (r'^%(C)s{5,30}\s*(?P<name>(?:\w|\.)+)\s*%(C)s{5,30}|'
 r'^%(C)s+@(?P<tag>\w+)\s*:\s*(?P<value>(?:\w|[.:])+)') %
{'C': comment}).match
- match_type = re.compile('(.+)[.](proto(?:[.]\S+)?|impl|init|cleanup)$').match
+ match_type = re.compile(r'(.+)[.](proto(?:[.]\S+)?|impl|init|cleanup)$').match
with closing(Utils.open_source_file(filename, encoding='UTF-8')) as f:
all_lines = f.readlines()
- utilities = defaultdict(lambda: [None, None, None, {}])
+ utilities = defaultdict(lambda: [None, None, {}])
lines = []
tags = defaultdict(set)
utility = type = None
@@ -255,7 +369,7 @@
from_file = files[0]
utilities = cls.load_utilities_from_file(from_file)
- proto, proto_block, impl, tags = utilities[util_code_name]
+ proto, impl, tags = utilities[util_code_name]
if tags:
orig_kwargs = kwargs.copy()
@@ -274,13 +388,11 @@
elif not values:
values = None
elif len(values) == 1:
- values = values[0]
+ values = list(values)[0]
kwargs[name] = values
if proto is not None:
kwargs['proto'] = proto
- if proto_block is not None:
- kwargs['proto_block'] = proto_block
if impl is not None:
kwargs['impl'] = impl
@@ -327,6 +439,10 @@
def get_tree(self, **kwargs):
pass
+ def __deepcopy__(self, memodict=None):
+ # No need to deep-copy utility code since it's essentially immutable.
+ return self
+
class UtilityCode(UtilityCodeBase):
"""
@@ -337,7 +453,7 @@
hashes/equals by instance
proto C prototypes
- impl implemenation code
+ impl implementation code
init code to call on module initialization
requires utility code dependencies
proto_block the place in the resulting file where the prototype should
@@ -411,21 +527,22 @@
def inject_string_constants(self, impl, output):
"""Replace 'PYIDENT("xyz")' by a constant Python identifier cname.
"""
- if 'PYIDENT(' not in impl:
+ if 'PYIDENT(' not in impl and 'PYUNICODE(' not in impl:
return False, impl
replacements = {}
def externalise(matchobj):
- name = matchobj.group(1)
+ key = matchobj.groups()
try:
- cname = replacements[name]
+ cname = replacements[key]
except KeyError:
- cname = replacements[name] = output.get_interned_identifier(
- StringEncoding.EncodedString(name)).cname
+ str_type, name = key
+ cname = replacements[key] = output.get_py_string_const(
+ StringEncoding.EncodedString(name), identifier=str_type == 'IDENT').cname
return cname
- impl = re.sub(r'PYIDENT\("([^"]+)"\)', externalise, impl)
- assert 'PYIDENT(' not in impl
+ impl = re.sub(r'PY(IDENT|UNICODE)\("([^"]+)"\)', externalise, impl)
+ assert 'PYIDENT(' not in impl and 'PYUNICODE(' not in impl
return bool(replacements), impl
def inject_unbound_methods(self, impl, output):
@@ -436,21 +553,18 @@
utility_code = set()
def externalise(matchobj):
- type_cname, method_name, args = matchobj.groups()
- args = [arg.strip() for arg in args[1:].split(',')]
- if len(args) == 1:
- call = '__Pyx_CallUnboundCMethod0'
- utility_code.add("CallUnboundCMethod0")
- elif len(args) == 2:
- call = '__Pyx_CallUnboundCMethod1'
- utility_code.add("CallUnboundCMethod1")
- else:
- assert False, "CALL_UNBOUND_METHOD() requires 1 or 2 call arguments"
-
- cname = output.get_cached_unbound_method(type_cname, method_name, len(args))
- return '%s(&%s, %s)' % (call, cname, ', '.join(args))
-
- impl = re.sub(r'CALL_UNBOUND_METHOD\(([a-zA-Z_]+),\s*"([^"]+)"((?:,\s*[^),]+)+)\)', externalise, impl)
+ type_cname, method_name, obj_cname, args = matchobj.groups()
+ args = [arg.strip() for arg in args[1:].split(',')] if args else []
+ assert len(args) < 3, "CALL_UNBOUND_METHOD() does not support %d call arguments" % len(args)
+ return output.cached_unbound_method_call_code(obj_cname, type_cname, method_name, args)
+
+ impl = re.sub(
+ r'CALL_UNBOUND_METHOD\('
+ r'([a-zA-Z_]+),' # type cname
+ r'\s*"([^"]+)",' # method name
+ r'\s*([^),]+)' # object cname
+ r'((?:,\s*[^),]+)*)' # args*
+ r'\)', externalise, impl)
assert 'CALL_UNBOUND_METHOD(' not in impl
for helper in sorted(utility_code):
@@ -564,6 +678,7 @@
available. Useful when you only have 'env' but not 'code'.
"""
__name__ = ''
+ requires = None
def __init__(self, callback):
self.callback = callback
@@ -602,6 +717,7 @@
self.in_try_finally = 0
self.exc_vars = None
+ self.current_except = None
self.can_trace = False
self.gil_owned = True
@@ -632,8 +748,8 @@
label += '_' + name
return label
- def new_yield_label(self):
- label = self.new_label('resume_from_yield')
+ def new_yield_label(self, expr_type='yield'):
+ label = self.new_label('resume_from_%s' % expr_type)
num_and_label = (len(self.yield_labels) + 1, label)
self.yield_labels.append(num_and_label)
return num_and_label
@@ -790,9 +906,11 @@
try-except and try-finally blocks to clean up temps in the
error case.
"""
- return [(cname, type)
- for (type, manage_ref), freelist in self.temps_free.items() if manage_ref
- for cname in freelist[0]]
+ return sorted([ # Enforce deterministic order.
+ (cname, type)
+ for (type, manage_ref), freelist in self.temps_free.items() if manage_ref
+ for cname in freelist[0]
+ ])
def start_collecting_temps(self):
"""
@@ -988,6 +1106,7 @@
'global_var',
'string_decls',
'decls',
+ 'late_includes',
'all_the_rest',
'pystring_table',
'cached_builtins',
@@ -1017,10 +1136,12 @@
self.const_cnames_used = {}
self.string_const_index = {}
+ self.dedup_const_index = {}
self.pyunicode_ptr_const_index = {}
self.num_const_index = {}
self.py_constants = []
self.cached_cmethods = {}
+ self.initialised_constants = set()
writer.set_global_state(self)
self.rootwriter = writer
@@ -1035,19 +1156,19 @@
else:
w = self.parts['cached_builtins']
w.enter_cfunc_scope()
- w.putln("static int __Pyx_InitCachedBuiltins(void) {")
+ w.putln("static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) {")
w = self.parts['cached_constants']
w.enter_cfunc_scope()
w.putln("")
- w.putln("static int __Pyx_InitCachedConstants(void) {")
+ w.putln("static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) {")
w.put_declare_refcount_context()
w.put_setup_refcount_context("__Pyx_InitCachedConstants")
w = self.parts['init_globals']
w.enter_cfunc_scope()
w.putln("")
- w.putln("static int __Pyx_InitGlobals(void) {")
+ w.putln("static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) {")
if not Options.generate_cleanup_code:
del self.parts['cleanup_globals']
@@ -1055,7 +1176,7 @@
w = self.parts['cleanup_globals']
w.enter_cfunc_scope()
w.putln("")
- w.putln("static void __Pyx_CleanupGlobals(void) {")
+ w.putln("static CYTHON_SMALL_CODE void __Pyx_CleanupGlobals(void) {")
code = self.parts['utility_code_proto']
code.putln("")
@@ -1130,7 +1251,12 @@
# constant handling at code generation time
- def get_cached_constants_writer(self):
+ def get_cached_constants_writer(self, target=None):
+ if target is not None:
+ if target in self.initialised_constants:
+ # Return None on second/later calls to prevent duplicate creation code.
+ return None
+ self.initialised_constants.add(target)
return self.parts['cached_constants']
def get_int_const(self, str_value, longness=False):
@@ -1148,13 +1274,19 @@
c = self.new_num_const(str_value, 'float', value_code)
return c
- def get_py_const(self, type, prefix='', cleanup_level=None):
+ def get_py_const(self, type, prefix='', cleanup_level=None, dedup_key=None):
+ if dedup_key is not None:
+ const = self.dedup_const_index.get(dedup_key)
+ if const is not None:
+ return const
# create a new Python object constant
const = self.new_py_const(type, prefix)
if cleanup_level is not None \
and cleanup_level <= Options.generate_cleanup_code:
cleanup_writer = self.parts['cleanup_globals']
cleanup_writer.putln('Py_CLEAR(%s);' % const.cname)
+ if dedup_key is not None:
+ self.dedup_const_index[dedup_key] = const
return const
def get_string_const(self, text, py_version=None):
@@ -1242,8 +1374,8 @@
prefix = Naming.const_prefix
return "%s%s" % (prefix, name_suffix)
- def get_cached_unbound_method(self, type_cname, method_name, args_count):
- key = (type_cname, method_name, args_count)
+ def get_cached_unbound_method(self, type_cname, method_name):
+ key = (type_cname, method_name)
try:
cname = self.cached_cmethods[key]
except KeyError:
@@ -1251,6 +1383,18 @@
'umethod', '%s_%s' % (type_cname, method_name))
return cname
+ def cached_unbound_method_call_code(self, obj_cname, type_cname, method_name, arg_cnames):
+ # admittedly, not the best place to put this method, but it is reused by UtilityCode and ExprNodes ...
+ utility_code_name = "CallUnboundCMethod%d" % len(arg_cnames)
+ self.use_utility_code(UtilityCode.load_cached(utility_code_name, "ObjectHandling.c"))
+ cache_cname = self.get_cached_unbound_method(type_cname, method_name)
+ args = [obj_cname] + arg_cnames
+ return "__Pyx_%s(&%s, %s)" % (
+ utility_code_name,
+ cache_cname,
+ ', '.join(args),
+ )
+
def add_cached_builtin_decl(self, entry):
if entry.is_builtin and entry.is_const:
if self.should_declare(entry.cname, entry):
@@ -1303,7 +1447,7 @@
decl = self.parts['decls']
init = self.parts['init_globals']
cnames = []
- for (type_cname, method_name, _), cname in sorted(self.cached_cmethods.items()):
+ for (type_cname, method_name), cname in sorted(self.cached_cmethods.items()):
cnames.append(cname)
method_name_cname = self.get_interned_identifier(StringEncoding.EncodedString(method_name)).cname
decl.putln('static __Pyx_CachedCFunction %s = {0, &%s, 0, 0, 0};' % (
@@ -1493,7 +1637,8 @@
self.use_utility_code(entry.utility_code_definition)
-def funccontext_property(name):
+def funccontext_property(func):
+ name = func.__name__
attribute_of = operator.attrgetter(name)
def get(self):
return attribute_of(self.funcstate)
@@ -1523,7 +1668,7 @@
as well
- labels, temps, exc_vars: One must construct a scope in which these can
exist by calling enter_cfunc_scope/exit_cfunc_scope (these are for
- sanity checking and forward compatabilty). Created insertion points
+ sanity checking and forward compatibility). Created insertion points
looses this scope and cannot access it.
- marker: Not copied to insertion point
- filename_table, filename_list, input_file_contents: All codewriters
@@ -1544,8 +1689,7 @@
# about the current class one is in
# code_config CCodeConfig configuration options for the C code writer
- globalstate = code_config = None
-
+ @cython.locals(create_from='CCodeWriter')
def __init__(self, create_from=None, buffer=None, copy_formatting=False):
if buffer is None: buffer = StringIOTree()
self.buffer = buffer
@@ -1554,6 +1698,8 @@
self.pyclass_stack = []
self.funcstate = None
+ self.globalstate = None
+ self.code_config = None
self.level = 0
self.call_level = 0
self.bol = 1
@@ -1616,19 +1762,27 @@
self.buffer.insert(writer.buffer)
# Properties delegated to function scope
- label_counter = funccontext_property("label_counter")
- return_label = funccontext_property("return_label")
- error_label = funccontext_property("error_label")
- labels_used = funccontext_property("labels_used")
- continue_label = funccontext_property("continue_label")
- break_label = funccontext_property("break_label")
- return_from_error_cleanup_label = funccontext_property("return_from_error_cleanup_label")
- yield_labels = funccontext_property("yield_labels")
+ @funccontext_property
+ def label_counter(self): pass
+ @funccontext_property
+ def return_label(self): pass
+ @funccontext_property
+ def error_label(self): pass
+ @funccontext_property
+ def labels_used(self): pass
+ @funccontext_property
+ def continue_label(self): pass
+ @funccontext_property
+ def break_label(self): pass
+ @funccontext_property
+ def return_from_error_cleanup_label(self): pass
+ @funccontext_property
+ def yield_labels(self): pass
# Functions delegated to function scope
def new_label(self, name=None): return self.funcstate.new_label(name)
def new_error_label(self): return self.funcstate.new_error_label()
- def new_yield_label(self): return self.funcstate.new_yield_label()
+ def new_yield_label(self, *args): return self.funcstate.new_yield_label(*args)
def get_loop_labels(self): return self.funcstate.get_loop_labels()
def set_loop_labels(self, labels): return self.funcstate.set_loop_labels(labels)
def new_loop_labels(self): return self.funcstate.new_loop_labels()
@@ -1653,8 +1807,8 @@
def get_py_float(self, str_value, value_code):
return self.globalstate.get_float_const(str_value, value_code).cname
- def get_py_const(self, type, prefix='', cleanup_level=None):
- return self.globalstate.get_py_const(type, prefix, cleanup_level).cname
+ def get_py_const(self, type, prefix='', cleanup_level=None, dedup_key=None):
+ return self.globalstate.get_py_const(type, prefix, cleanup_level, dedup_key).cname
def get_string_const(self, text):
return self.globalstate.get_string_const(text).cname
@@ -1676,8 +1830,8 @@
def intern_identifier(self, text):
return self.get_py_string_const(text, identifier=True)
- def get_cached_constants_writer(self):
- return self.globalstate.get_cached_constants_writer()
+ def get_cached_constants_writer(self, target=None):
+ return self.globalstate.get_cached_constants_writer(target)
# code generation
@@ -1744,8 +1898,6 @@
self.put(code)
def put(self, code):
- if is_self_assignment(code):
- return
fix_indent = False
if "{" in code:
dl = code.count("{")
@@ -1916,9 +2068,12 @@
if entry.type.is_pyobject:
self.putln("__Pyx_XGIVEREF(%s);" % self.entry_as_pyobject(entry))
- def put_var_incref(self, entry):
+ def put_var_incref(self, entry, nanny=True):
if entry.type.is_pyobject:
- self.putln("__Pyx_INCREF(%s);" % self.entry_as_pyobject(entry))
+ if nanny:
+ self.putln("__Pyx_INCREF(%s);" % self.entry_as_pyobject(entry))
+ else:
+ self.putln("Py_INCREF(%s);" % self.entry_as_pyobject(entry))
def put_var_xincref(self, entry):
if entry.type.is_pyobject:
@@ -1942,8 +2097,8 @@
self.put_xdecref_memoryviewslice(cname, have_gil=have_gil)
return
- prefix = nanny and '__Pyx' or 'Py'
- X = null_check and 'X' or ''
+ prefix = '__Pyx' if nanny else 'Py'
+ X = 'X' if null_check else ''
if clear:
if clear_before_decref:
@@ -1967,9 +2122,12 @@
if entry.type.is_pyobject:
self.putln("__Pyx_XDECREF(%s);" % self.entry_as_pyobject(entry))
- def put_var_xdecref(self, entry):
+ def put_var_xdecref(self, entry, nanny=True):
if entry.type.is_pyobject:
- self.putln("__Pyx_XDECREF(%s);" % self.entry_as_pyobject(entry))
+ if nanny:
+ self.putln("__Pyx_XDECREF(%s);" % self.entry_as_pyobject(entry))
+ else:
+ self.putln("Py_XDECREF(%s);" % self.entry_as_pyobject(entry))
def put_var_decref_clear(self, entry):
self._put_var_decref_clear(entry, null_check=False)
@@ -2036,7 +2194,7 @@
if entry.in_closure:
self.put_giveref('Py_None')
- def put_pymethoddef(self, entry, term, allow_skip=True):
+ def put_pymethoddef(self, entry, term, allow_skip=True, wrapper_code_writer=None):
if entry.is_special or entry.name == '__getattribute__':
if entry.name not in special_py_methods:
if entry.name == '__getattr__' and not self.globalstate.directives['fast_getattr']:
@@ -2046,22 +2204,38 @@
# that's better than ours.
elif allow_skip:
return
- from .TypeSlots import method_coexist
- if entry.doc:
- doc_code = entry.doc_cname
- else:
- doc_code = 0
+
method_flags = entry.signature.method_flags()
- if method_flags:
- if entry.is_special:
- method_flags += [method_coexist]
- self.putln(
- '{"%s", (PyCFunction)%s, %s, %s}%s' % (
- entry.name,
- entry.func_cname,
- "|".join(method_flags),
- doc_code,
- term))
+ if not method_flags:
+ return
+ if entry.is_special:
+ from . import TypeSlots
+ method_flags += [TypeSlots.method_coexist]
+ func_ptr = wrapper_code_writer.put_pymethoddef_wrapper(entry) if wrapper_code_writer else entry.func_cname
+ # Add required casts, but try not to shadow real warnings.
+ cast = '__Pyx_PyCFunctionFast' if 'METH_FASTCALL' in method_flags else 'PyCFunction'
+ if 'METH_KEYWORDS' in method_flags:
+ cast += 'WithKeywords'
+ if cast != 'PyCFunction':
+ func_ptr = '(void*)(%s)%s' % (cast, func_ptr)
+ self.putln(
+ '{"%s", (PyCFunction)%s, %s, %s}%s' % (
+ entry.name,
+ func_ptr,
+ "|".join(method_flags),
+ entry.doc_cname if entry.doc else '0',
+ term))
+
+ def put_pymethoddef_wrapper(self, entry):
+ func_cname = entry.func_cname
+ if entry.is_special:
+ method_flags = entry.signature.method_flags()
+ if method_flags and 'METH_NOARGS' in method_flags:
+ # Special NOARGS methods really take no arguments besides 'self', but PyCFunction expects one.
+ func_cname = Naming.method_wrapper_prefix + func_cname
+ self.putln("static PyObject *%s(PyObject *self, CYTHON_UNUSED PyObject *arg) {return %s(self);}" % (
+ func_cname, entry.func_cname))
+ return func_cname
# GIL methods
@@ -2138,7 +2312,8 @@
# error handling
def put_error_if_neg(self, pos, value):
-# return self.putln("if (unlikely(%s < 0)) %s" % (value, self.error_goto(pos))) # TODO this path is almost _never_ taken, yet this macro makes is slower!
+ # TODO this path is almost _never_ taken, yet this macro makes it slower!
+ # return self.putln("if (unlikely(%s < 0)) %s" % (value, self.error_goto(pos)))
return self.putln("if (%s < 0) %s" % (value, self.error_goto(pos)))
def put_error_if_unbound(self, pos, entry, in_nogil_context=False):
@@ -2182,6 +2357,8 @@
def error_goto(self, pos):
lbl = self.funcstate.error_label
self.funcstate.use_label(lbl)
+ if pos is None:
+ return 'goto %s;' % lbl
return "__PYX_ERR(%s, %s, %s)" % (
self.lookup_filename(pos[0]),
pos[1],
@@ -2290,6 +2467,7 @@
self.putln(" #define unlikely(x) __builtin_expect(!!(x), 0)")
self.putln("#endif")
+
class PyrexCodeWriter(object):
# f file output file
# level int indentation level
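A note on the `funccontext_property` refactor in the Code.py hunks above: the old code built delegating properties from attribute-name strings (`funccontext_property("error_label")`), while the new code turns the factory into a decorator that reads the name from the decorated (empty-bodied) function, so the name is written once instead of twice. A minimal standalone sketch of the same pattern — the class and attribute names below are illustrative, not Cython's actual ones:

```python
import operator


def delegated_property(func):
    """Create a property that forwards reads and writes of the
    decorated function's name to self.funcstate."""
    name = func.__name__
    attribute_of = operator.attrgetter(name)

    def get(self):
        return attribute_of(self.funcstate)

    def set(self, value):
        setattr(self.funcstate, name, value)

    return property(get, set, doc=func.__doc__)


class FuncState(object):
    # Hypothetical per-function compile state holding labels etc.
    def __init__(self):
        self.error_label = 'err0'


class Writer(object):
    def __init__(self):
        self.funcstate = FuncState()

    # The function body is never called; only its __name__ is used.
    @delegated_property
    def error_label(self): pass


w = Writer()
assert w.error_label == 'err0'      # read delegates to funcstate
w.error_label = 'err1'              # write delegates too
assert w.funcstate.error_label == 'err1'
```

The decorator form also survives renames better: a refactoring tool that renames `error_label` updates the delegation automatically, which the string-keyed version did not.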
diff -Nru cython-0.26.1/Cython/Compiler/CythonScope.py cython-0.29.14/Cython/Compiler/CythonScope.py
--- cython-0.26.1/Cython/Compiler/CythonScope.py 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/CythonScope.py 2018-09-22 14:18:56.000000000 +0000
@@ -26,6 +26,10 @@
cname='')
entry.in_cinclude = True
+ def is_cpp(self):
+ # Allow C++ utility code in C++ contexts.
+ return self.context.cpp
+
def lookup_type(self, name):
# This function should go away when types are all first-level objects.
type = parse_basic_type(name)
diff -Nru cython-0.26.1/Cython/Compiler/Errors.py cython-0.29.14/Cython/Compiler/Errors.py
--- cython-0.26.1/Cython/Compiler/Errors.py 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Errors.py 2018-09-22 14:18:56.000000000 +0000
@@ -10,6 +10,7 @@
any_string_type = (bytes, str)
import sys
+from contextlib import contextmanager
from ..Utils import open_new_file
from . import DebugFlags
@@ -228,19 +229,34 @@
error_stack = []
+
def hold_errors():
error_stack.append([])
+
def release_errors(ignore=False):
held_errors = error_stack.pop()
if not ignore:
for err in held_errors:
report_error(err)
+
def held_errors():
return error_stack[-1]
+# same as context manager:
+
+@contextmanager
+def local_errors(ignore=False):
+ errors = []
+ error_stack.append(errors)
+ try:
+ yield errors
+ finally:
+ release_errors(ignore=ignore)
+
+
# this module needs a redesign to support parallel cythonisation, but
# for now, the following works at least in sequential compiler runs
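The Errors.py hunk above adds `local_errors()`, a context manager that wraps the pre-existing `hold_errors()`/`release_errors()` pair so callers cannot forget to pop the error stack on an exception path. A simplified standalone sketch of the same mechanism, under the assumption that errors are routed through a module-level stack; the `report_error` stand-in here is hypothetical, not Cython's real reporting code:

```python
from contextlib import contextmanager

error_stack = []
reported = []  # stand-in sink for real error reporting


def report_error(err):
    reported.append(err)


def error(err):
    # Route errors to the innermost held buffer, if any.
    if error_stack:
        error_stack[-1].append(err)
    else:
        report_error(err)


@contextmanager
def local_errors(ignore=False):
    """Collect errors raised inside the block; re-report them on exit
    unless ignore=True. The stack is popped even if the block raises."""
    errors = []
    error_stack.append(errors)
    try:
        yield errors
    finally:
        error_stack.pop()
        if not ignore:
            for err in errors:
                report_error(err)


# Speculative analysis: errors are captured, then discarded.
with local_errors(ignore=True) as errs:
    error("speculative parse failed")
assert errs == ["speculative parse failed"]
assert reported == []
```

This is why the ExprNodes.py hunks further down can replace `hold_errors()` / `release_errors(ignore=True)` call pairs with a single `with local_errors(ignore=True):` block.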
diff -Nru cython-0.26.1/Cython/Compiler/ExprNodes.py cython-0.29.14/Cython/Compiler/ExprNodes.py
--- cython-0.26.1/Cython/Compiler/ExprNodes.py 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/ExprNodes.py 2019-05-28 19:54:08.000000000 +0000
@@ -7,7 +7,7 @@
import cython
cython.declare(error=object, warning=object, warn_once=object, InternalError=object,
CompileError=object, UtilityCode=object, TempitaUtilityCode=object,
- StringEncoding=object, operator=object,
+ StringEncoding=object, operator=object, local_errors=object, report_error=object,
Naming=object, Nodes=object, PyrexTypes=object, py_object_type=object,
list_type=object, tuple_type=object, set_type=object, dict_type=object,
unicode_type=object, str_type=object, bytes_type=object, type_type=object,
@@ -16,18 +16,19 @@
bytearray_type=object, slice_type=object, _py_int_types=object,
IS_PYTHON3=cython.bint)
+import re
import sys
import copy
import os.path
import operator
-from .Errors import error, warning, warn_once, InternalError, CompileError
-from .Errors import hold_errors, release_errors, held_errors, report_error
+from .Errors import (
+ error, warning, InternalError, CompileError, report_error, local_errors)
from .Code import UtilityCode, TempitaUtilityCode
from . import StringEncoding
from . import Naming
from . import Nodes
-from .Nodes import Node, utility_code_for_imports
+from .Nodes import Node, utility_code_for_imports, analyse_type_annotation
from . import PyrexTypes
from .PyrexTypes import py_object_type, c_long_type, typecast, error_type, \
unspecified_type
@@ -42,9 +43,10 @@
from ..Debugging import print_call_chain
from .DebugFlags import debug_disposal_code, debug_temp_alloc, \
debug_coercion
-from .Pythran import to_pythran, is_pythran_supported_type, is_pythran_supported_operation_type, \
- is_pythran_expr, pythran_func_type, pythran_binop_type, pythran_unaryop_type, has_np_pythran, \
- pythran_indexing_code, pythran_indexing_type, is_pythran_supported_node_or_none, pythran_type
+from .Pythran import (to_pythran, is_pythran_supported_type, is_pythran_supported_operation_type,
+ is_pythran_expr, pythran_func_type, pythran_binop_type, pythran_unaryop_type, has_np_pythran,
+ pythran_indexing_code, pythran_indexing_type, is_pythran_supported_node_or_none, pythran_type,
+ pythran_is_numpy_func_supported, pythran_get_func_include_file, pythran_functor)
from .PyrexTypes import PythranExpr
try:
@@ -185,20 +187,70 @@
return item_types.pop()
return None
+
+def make_dedup_key(outer_type, item_nodes):
+ """
+ Recursively generate a deduplication key from a sequence of values.
+ Includes Cython node types to work around the fact that (1, 2.0) == (1.0, 2), for example.
+
+ @param outer_type: The type of the outer container.
+ @param item_nodes: A sequence of constant nodes that will be traversed recursively.
+ @return: A tuple that can be used as a dict key for deduplication.
+ """
+ item_keys = [
+ (py_object_type, None, type(None)) if node is None
+ # For sequences and their "mult_factor", see TupleNode.
+ else make_dedup_key(node.type, [node.mult_factor if node.is_literal else None] + node.args) if node.is_sequence_constructor
+ else make_dedup_key(node.type, (node.start, node.stop, node.step)) if node.is_slice
+ # For constants, look at the Python value type if we don't know the concrete Cython type.
+ else (node.type, node.constant_result,
+ type(node.constant_result) if node.type is py_object_type else None) if node.has_constant_result()
+ else None # something we cannot handle => short-circuit below
+ for node in item_nodes
+ ]
+ if None in item_keys:
+ return None
+ return outer_type, tuple(item_keys)
+
+
+# Returns a block of code to translate the exception,
+# plus a boolean indicating whether to check for Python exceptions.
def get_exception_handler(exception_value):
if exception_value is None:
- return "__Pyx_CppExn2PyErr();"
+ return "__Pyx_CppExn2PyErr();", False
+ elif (exception_value.type == PyrexTypes.c_char_type
+ and exception_value.value == '*'):
+ return "__Pyx_CppExn2PyErr();", True
elif exception_value.type.is_pyobject:
- return 'try { throw; } catch(const std::exception& exn) { PyErr_SetString(%s, exn.what()); } catch(...) { PyErr_SetNone(%s); }' % (
- exception_value.entry.cname,
- exception_value.entry.cname)
+ return (
+ 'try { throw; } catch(const std::exception& exn) {'
+ 'PyErr_SetString(%s, exn.what());'
+ '} catch(...) { PyErr_SetNone(%s); }' % (
+ exception_value.entry.cname,
+ exception_value.entry.cname),
+ False)
else:
- return '%s(); if (!PyErr_Occurred()) PyErr_SetString(PyExc_RuntimeError , "Error converting c++ exception.");' % exception_value.entry.cname
+ return (
+ '%s(); if (!PyErr_Occurred())'
+ 'PyErr_SetString(PyExc_RuntimeError, '
+ '"Error converting c++ exception.");' % (
+ exception_value.entry.cname),
+ False)
+
+def maybe_check_py_error(code, check_py_exception, pos, nogil):
+ if check_py_exception:
+ if nogil:
+ code.putln(code.error_goto_if("__Pyx_ErrOccurredWithGIL()", pos))
+ else:
+ code.putln(code.error_goto_if("PyErr_Occurred()", pos))
-def translate_cpp_exception(code, pos, inside, exception_value, nogil):
- raise_py_exception = get_exception_handler(exception_value)
+def translate_cpp_exception(code, pos, inside, py_result, exception_value, nogil):
+ raise_py_exception, check_py_exception = get_exception_handler(exception_value)
code.putln("try {")
code.putln("%s" % inside)
+ if py_result:
+ code.putln(code.error_goto_if_null(py_result, pos))
+ maybe_check_py_error(code, check_py_exception, pos, nogil)
code.putln("} catch(...) {")
if nogil:
code.put_ensure_gil(declare_gilstate=True)
@@ -212,12 +264,14 @@
# both have an exception declaration.
def translate_double_cpp_exception(code, pos, lhs_type, lhs_code, rhs_code,
lhs_exc_val, assign_exc_val, nogil):
- handle_lhs_exc = get_exception_handler(lhs_exc_val)
- handle_assignment_exc = get_exception_handler(assign_exc_val)
+ handle_lhs_exc, lhc_check_py_exc = get_exception_handler(lhs_exc_val)
+ handle_assignment_exc, assignment_check_py_exc = get_exception_handler(assign_exc_val)
code.putln("try {")
code.putln(lhs_type.declaration_code("__pyx_local_lvalue = %s;" % lhs_code))
+ maybe_check_py_error(code, lhc_check_py_exc, pos, nogil)
code.putln("try {")
code.putln("__pyx_local_lvalue = %s;" % rhs_code)
+ maybe_check_py_error(code, assignment_check_py_exc, pos, nogil)
# Catch any exception from the overloaded assignment.
code.putln("} catch(...) {")
if nogil:
@@ -254,9 +308,11 @@
# result_is_used boolean indicates that the result will be dropped and the
# is_numpy_attribute boolean Is a Numpy module attribute
# result_code/temp_result can safely be set to None
+ # annotation ExprNode or None PEP526 annotation for names or expressions
result_ctype = None
type = None
+ annotation = None
temp_code = None
old_temp = None # error checker for multiple frees etc.
use_managed_ref = True # can be set by optimisation transforms
@@ -847,6 +903,9 @@
if src_type.is_fused:
error(self.pos, "Type is not specialized")
+ elif src_type.is_null_ptr and dst_type.is_ptr:
+ # NULL can be implicitly cast to any pointer type
+ return self
else:
error(self.pos, "Cannot coerce to a type that is not specialized")
@@ -868,16 +927,19 @@
elif not src_type.is_error:
error(self.pos,
"Cannot convert '%s' to memoryviewslice" % (src_type,))
- elif not src.type.conforms_to(dst_type, broadcast=self.is_memview_broadcast,
- copying=self.is_memview_copy_assignment):
- if src.type.dtype.same_as(dst_type.dtype):
- msg = "Memoryview '%s' not conformable to memoryview '%s'."
- tup = src.type, dst_type
- else:
- msg = "Different base types for memoryviews (%s, %s)"
- tup = src.type.dtype, dst_type.dtype
+ else:
+ if src.type.writable_needed:
+ dst_type.writable_needed = True
+ if not src.type.conforms_to(dst_type, broadcast=self.is_memview_broadcast,
+ copying=self.is_memview_copy_assignment):
+ if src.type.dtype.same_as(dst_type.dtype):
+ msg = "Memoryview '%s' not conformable to memoryview '%s'."
+ tup = src.type, dst_type
+ else:
+ msg = "Different base types for memoryviews (%s, %s)"
+ tup = src.type.dtype, dst_type.dtype
- error(self.pos, msg % tup)
+ error(self.pos, msg % tup)
elif dst_type.is_pyobject:
if not src.type.is_pyobject:
@@ -1079,6 +1141,12 @@
def may_be_none(self):
return True
+ def coerce_to(self, dst_type, env):
+ if not (dst_type.is_pyobject or dst_type.is_memoryviewslice or dst_type.is_error):
+ # Catch this error early and loudly.
+ error(self.pos, "Cannot assign None to %s" % dst_type)
+ return super(NoneNode, self).coerce_to(dst_type, env)
+
class EllipsisNode(PyConstNode):
# '...' in a subscript list.
@@ -1141,6 +1209,10 @@
return str(int(self.value))
def coerce_to(self, dst_type, env):
+ if dst_type == self.type:
+ return self
+ if dst_type is py_object_type and self.type is Builtin.bool_type:
+ return self
if dst_type.is_pyobject and self.type.is_int:
return BoolNode(
self.pos, value=self.value,
@@ -1360,19 +1432,28 @@
type = PyrexTypes.parse_basic_type(name)
if type is not None:
return type
- hold_errors()
+
+ global_entry = env.global_scope().lookup(name)
+ if global_entry and global_entry.type and (
+ global_entry.type.is_extension_type
+ or global_entry.type.is_struct_or_union
+ or global_entry.type.is_builtin_type
+ or global_entry.type.is_cpp_class):
+ return global_entry.type
+
from .TreeFragment import TreeFragment
- pos = (pos[0], pos[1], pos[2]-7)
- try:
- declaration = TreeFragment(u"sizeof(%s)" % name, name=pos[0].filename, initial_pos=pos)
- except CompileError:
- sizeof_node = None
- else:
- sizeof_node = declaration.root.stats[0].expr
- sizeof_node = sizeof_node.analyse_types(env)
- release_errors(ignore=True)
- if isinstance(sizeof_node, SizeofTypeNode):
- return sizeof_node.arg_type
+ with local_errors(ignore=True):
+ pos = (pos[0], pos[1], pos[2]-7)
+ try:
+ declaration = TreeFragment(u"sizeof(%s)" % name, name=pos[0].filename, initial_pos=pos)
+ except CompileError:
+ pass
+ else:
+ sizeof_node = declaration.root.stats[0].expr
+ if isinstance(sizeof_node, SizeofTypeNode):
+ sizeof_node = sizeof_node.analyse_types(env)
+ if isinstance(sizeof_node, SizeofTypeNode):
+ return sizeof_node.arg_type
return None
@@ -1426,7 +1507,7 @@
node.type = Builtin.bytes_type
else:
self.check_for_coercion_error(dst_type, env, fail=True)
- return node
+ return node
elif dst_type in (PyrexTypes.c_char_ptr_type, PyrexTypes.c_const_char_ptr_type):
node.type = dst_type
return node
@@ -1435,8 +1516,10 @@
else PyrexTypes.c_char_ptr_type)
return CastNode(node, dst_type)
elif dst_type.assignable_from(PyrexTypes.c_char_ptr_type):
- node.type = dst_type
- return node
+ # Exclude the case of passing a C string literal into a non-const C++ string.
+ if not dst_type.is_cpp_class or dst_type.is_const:
+ node.type = dst_type
+ return node
# We still need to perform normal coerce_to processing on the
# result, because we might be coercing to an extension type,
@@ -1545,15 +1628,17 @@
# decoded by the UTF-8 codec in Py3.3
self.result_code = code.get_py_const(py_object_type, 'ustring')
data_cname = code.get_pyunicode_ptr_const(self.value)
- code = code.get_cached_constants_writer()
- code.mark_pos(self.pos)
- code.putln(
+ const_code = code.get_cached_constants_writer(self.result_code)
+ if const_code is None:
+ return # already initialised
+ const_code.mark_pos(self.pos)
+ const_code.putln(
"%s = PyUnicode_FromUnicode(%s, (sizeof(%s) / sizeof(Py_UNICODE))-1); %s" % (
self.result_code,
data_cname,
data_cname,
- code.error_goto_if_null(self.result_code, self.pos)))
- code.put_error_if_neg(
+ const_code.error_goto_if_null(self.result_code, self.pos)))
+ const_code.put_error_if_neg(
self.pos, "__Pyx_PyUnicode_READY(%s)" % self.result_code)
else:
self.result_code = code.get_py_string_const(self.value)
@@ -1711,12 +1796,7 @@
self.type = error_type
return
self.cpp_check(env)
- constructor = type.scope.lookup(u'')
- if constructor is None:
- func_type = PyrexTypes.CFuncType(
- type, [], exception_check='+', nogil=True)
- type.scope.declare_cfunction(u'', func_type, self.pos)
- constructor = type.scope.lookup(u'')
+ constructor = type.get_constructor(self.pos)
self.class_type = type
self.entry = constructor
self.type = constructor.type
@@ -1830,6 +1910,34 @@
return super(NameNode, self).coerce_to(dst_type, env)
+ def declare_from_annotation(self, env, as_target=False):
+ """Implements PEP 526 annotation typing in a fairly relaxed way.
+
+ Annotations are ignored for global variables, Python class attributes and already declared variables.
+ String literals are allowed and ignored.
+ The ambiguous Python types 'int' and 'long' are ignored and the 'cython.int' form must be used instead.
+ """
+ if not env.directives['annotation_typing']:
+ return
+ if env.is_module_scope or env.is_py_class_scope:
+ # annotations never create global cdef names and Python classes don't support them anyway
+ return
+ name = self.name
+ if self.entry or env.lookup_here(name) is not None:
+ # already declared => ignore annotation
+ return
+
+ annotation = self.annotation
+ if annotation.is_string_literal:
+ # name: "description" => not a type, but still a declared variable or attribute
+ atype = None
+ else:
+ _, atype = analyse_type_annotation(annotation, env)
+ if atype is None:
+ atype = unspecified_type if as_target and env.directives['infer_types'] != False else py_object_type
+ self.entry = env.declare_var(name, atype, self.pos, is_cdef=not as_target)
+ self.entry.annotation = annotation
+
def analyse_as_module(self, env):
# Try to interpret this as a reference to a cimported module.
# Returns the module scope, or None.
@@ -1869,6 +1977,9 @@
def analyse_target_declaration(self, env):
if not self.entry:
self.entry = env.lookup_here(self.name)
+ if not self.entry and self.annotation is not None:
+ # name : type = ...
+ self.declare_from_annotation(env, as_target=True)
if not self.entry:
if env.directives['warn.undeclared']:
warning(self.pos, "implicit declaration of '%s'" % self.name, 1)
@@ -1885,19 +1996,21 @@
def analyse_types(self, env):
self.initialized_check = env.directives['initializedcheck']
- if self.entry is None:
- self.entry = env.lookup(self.name)
- if not self.entry:
- self.entry = env.declare_builtin(self.name, self.pos)
- if not self.entry:
- self.type = PyrexTypes.error_type
- return self
entry = self.entry
- if entry:
- entry.used = 1
- if entry.type.is_buffer:
- from . import Buffer
- Buffer.used_buffer_aux_vars(entry)
+ if entry is None:
+ entry = env.lookup(self.name)
+ if not entry:
+ entry = env.declare_builtin(self.name, self.pos)
+ if entry and entry.is_builtin and entry.is_const:
+ self.is_literal = True
+ if not entry:
+ self.type = PyrexTypes.error_type
+ return self
+ self.entry = entry
+ entry.used = 1
+ if entry.type.is_buffer:
+ from . import Buffer
+ Buffer.used_buffer_aux_vars(entry)
self.analyse_rvalue_entry(env)
return self
@@ -1982,14 +2095,13 @@
py_entry.is_pyglobal = True
py_entry.scope = self.entry.scope
self.entry = py_entry
- elif not (entry.is_const or entry.is_variable
- or entry.is_builtin or entry.is_cfunction
- or entry.is_cpp_class):
- if self.entry.as_variable:
- self.entry = self.entry.as_variable
- elif not self.is_cython_module:
- error(self.pos,
- "'%s' is not a constant, variable or function identifier" % self.name)
+ elif not (entry.is_const or entry.is_variable or
+ entry.is_builtin or entry.is_cfunction or
+ entry.is_cpp_class):
+ if self.entry.as_variable:
+ self.entry = self.entry.as_variable
+ elif not self.is_cython_module:
+ error(self.pos, "'%s' is not a constant, variable or function identifier" % self.name)
def is_cimported_module_without_shadow(self, env):
if self.is_cython_module or self.cython_attribute:
@@ -2035,7 +2147,11 @@
def check_const(self):
entry = self.entry
- if entry is not None and not (entry.is_const or entry.is_cfunction or entry.is_builtin):
+ if entry is not None and not (
+ entry.is_const or
+ entry.is_cfunction or
+ entry.is_builtin or
+ entry.type.is_const):
self.not_const()
return False
return True
@@ -2075,6 +2191,8 @@
entry = self.entry
if entry is None:
return # There was an error earlier
+ if entry.utility_code:
+ code.globalstate.use_utility_code(entry.utility_code)
if entry.is_builtin and entry.is_const:
return # Lookup already cached
elif entry.is_pyclass_attr:
@@ -2095,7 +2213,7 @@
code.globalstate.use_utility_code(
UtilityCode.load_cached("GetModuleGlobalName", "ObjectHandling.c"))
code.putln(
- '%s = __Pyx_GetModuleGlobalName(%s);' % (
+ '__Pyx_GetModuleGlobalName(%s, %s);' % (
self.result(),
interned_cname))
if not self.cf_is_null:
@@ -2124,7 +2242,7 @@
code.globalstate.use_utility_code(
UtilityCode.load_cached("GetModuleGlobalName", "ObjectHandling.c"))
code.putln(
- '%s = __Pyx_GetModuleGlobalName(%s); %s' % (
+ '__Pyx_GetModuleGlobalName(%s, %s); %s' % (
self.result(),
interned_cname,
code.error_goto_if_null(self.result(), self.pos)))
@@ -2133,7 +2251,7 @@
code.globalstate.use_utility_code(
UtilityCode.load_cached("GetNameInClass", "ObjectHandling.c"))
code.putln(
- '%s = __Pyx_GetNameInClass(%s, %s); %s' % (
+ '__Pyx_GetNameInClass(%s, %s, %s); %s' % (
self.result(),
entry.scope.namespace_cname,
interned_cname,
@@ -2177,7 +2295,8 @@
setter = 'PyDict_SetItem'
namespace = Naming.moddict_cname
elif entry.is_pyclass_attr:
- setter = 'PyObject_SetItem'
+ code.globalstate.use_utility_code(UtilityCode.load_cached("SetNameInClass", "ObjectHandling.c"))
+ setter = '__Pyx_SetNameInClass'
else:
assert False, repr(entry)
code.put_error_if_neg(
@@ -2245,12 +2364,20 @@
if overloaded_assignment:
result = rhs.result()
if exception_check == '+':
- translate_cpp_exception(code, self.pos, '%s = %s;' % (self.result(), result), exception_value, self.in_nogil_context)
+ translate_cpp_exception(
+ code, self.pos,
+ '%s = %s;' % (self.result(), result),
+ self.result() if self.type.is_pyobject else None,
+ exception_value, self.in_nogil_context)
else:
code.putln('%s = %s;' % (self.result(), result))
else:
result = rhs.result_as(self.ctype())
- code.putln('%s = %s;' % (self.result(), result))
+
+ if is_pythran_expr(self.type):
+ code.putln('new (&%s) decltype(%s){%s};' % (self.result(), self.result(), result))
+ elif result != self.result():
+ code.putln('%s = %s;' % (self.result(), result))
if debug_disposal_code:
print("NameNode.generate_assignment_code:")
print("...generating post-assignment code for %s" % rhs)
@@ -2700,8 +2827,7 @@
code.putln("if (unlikely(!%s)) {" % result_name)
code.putln("PyObject* exc_type = PyErr_Occurred();")
code.putln("if (exc_type) {")
- code.putln("if (likely(exc_type == PyExc_StopIteration ||"
- " PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();")
+ code.putln("if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();")
code.putln("else %s" % code.error_goto(self.pos))
code.putln("}")
code.putln("break;")
@@ -2835,18 +2961,18 @@
# The __exit__() call of a 'with' statement. Used in both the
# except and finally clauses.
- # with_stat WithStatNode the surrounding 'with' statement
- # args TupleNode or ResultStatNode the exception info tuple
- # await AwaitExprNode the await expression of an 'async with' statement
+ # with_stat WithStatNode the surrounding 'with' statement
+ # args TupleNode or ResultStatNode the exception info tuple
+ # await_expr AwaitExprNode the await expression of an 'async with' statement
- subexprs = ['args', 'await']
+ subexprs = ['args', 'await_expr']
test_if_run = True
- await = None
+ await_expr = None
def analyse_types(self, env):
self.args = self.args.analyse_types(env)
- if self.await:
- self.await = self.await.analyse_types(env)
+ if self.await_expr:
+ self.await_expr = self.await_expr.analyse_types(env)
self.type = PyrexTypes.c_bint_type
self.is_temp = True
return self
@@ -2873,12 +2999,12 @@
code.putln(code.error_goto_if_null(result_var, self.pos))
code.put_gotref(result_var)
- if self.await:
+ if self.await_expr:
# FIXME: result_var temp currently leaks into the closure
- self.await.generate_evaluation_code(code, source_cname=result_var, decref_source=True)
- code.putln("%s = %s;" % (result_var, self.await.py_result()))
- self.await.generate_post_assignment_code(code)
- self.await.free_temps(code)
+ self.await_expr.generate_evaluation_code(code, source_cname=result_var, decref_source=True)
+ code.putln("%s = %s;" % (result_var, self.await_expr.py_result()))
+ self.await_expr.generate_post_assignment_code(code)
+ self.await_expr.free_temps(code)
if self.result_is_used:
self.allocate_temp_result(code)
@@ -3038,12 +3164,27 @@
is_ascii = False
if isinstance(node, UnicodeNode):
try:
+ # most strings will be ASCII or at least Latin-1
node.value.encode('iso8859-1')
max_char_value = '255'
node.value.encode('us-ascii')
is_ascii = True
except UnicodeEncodeError:
- pass
+ if max_char_value != '255':
+ # not ISO8859-1 => check BMP limit
+ max_char = max(map(ord, node.value))
+ if max_char < 0xD800:
+ # BMP-only, no surrogate pairs used
+ max_char_value = '65535'
+ ulength = str(len(node.value))
+ elif max_char >= 65536:
+ # clearly outside of BMP, and not on a 16-bit Unicode system
+ max_char_value = '1114111'
+ ulength = str(len(node.value))
+ else:
+ # not really worth implementing a check for surrogate pairs here
+ # drawback: C code can differ when generating on Py2 with 2-byte Unicode
+ pass
else:
ulength = str(len(node.value))
elif isinstance(node, FormattedValueNode) and node.value.type.is_numeric:
@@ -3092,7 +3233,7 @@
c_format_spec = None
find_conversion_func = {
- 's': 'PyObject_Str',
+ 's': 'PyObject_Unicode',
'r': 'PyObject_Repr',
'a': 'PyObject_ASCII', # NOTE: mapped to PyObject_Repr() in Py2
}.get
@@ -3112,7 +3253,7 @@
self.format_spec = self.format_spec.analyse_types(env).coerce_to_pyobject(env)
if self.c_format_spec is None:
self.value = self.value.coerce_to_pyobject(env)
- if not self.format_spec and not self.conversion_char:
+ if not self.format_spec and (not self.conversion_char or self.conversion_char == 's'):
if self.value.type is unicode_type and not self.value.may_be_none():
# value is definitely a unicode string and we don't format it any special
return self.value
@@ -3242,7 +3383,7 @@
# in most cases, indexing will return a safe reference to an object in a container,
# so we consider the result safe if the base object is
return self.base.is_ephemeral() or self.base.type in (
- basestring_type, str_type, bytes_type, unicode_type)
+ basestring_type, str_type, bytes_type, bytearray_type, unicode_type)
def check_const_addr(self):
return self.base.check_const_addr() and self.index.check_const()
@@ -3279,10 +3420,6 @@
is_subscript = True
is_fused_index = False
- def __init__(self, pos, index, **kw):
- ExprNode.__init__(self, pos, index=index, **kw)
- self._index = index
-
def calculate_constant_result(self):
self.constant_result = self.base.constant_result[self.index.constant_result]
@@ -3306,7 +3443,7 @@
return False
if isinstance(self.index, SliceNode):
# slicing!
- if base_type in (bytes_type, str_type, unicode_type,
+ if base_type in (bytes_type, bytearray_type, str_type, unicode_type,
basestring_type, list_type, tuple_type):
return False
return ExprNode.may_be_none(self)
@@ -3327,10 +3464,22 @@
positional_args=template_values,
keyword_args=None)
return type_node.analyse(env, base_type=base_type)
+ elif self.index.is_slice or self.index.is_sequence_constructor:
+ # memory view
+ from . import MemoryView
+ env.use_utility_code(MemoryView.view_utility_code)
+ axes = [self.index] if self.index.is_slice else list(self.index.args)
+ return PyrexTypes.MemoryViewSliceType(base_type, MemoryView.get_axes_specs(env, axes))
else:
+ # C array
index = self.index.compile_time_value(env)
if index is not None:
- return PyrexTypes.CArrayType(base_type, int(index))
+ try:
+ index = int(index)
+ except (ValueError, TypeError):
+ pass
+ else:
+ return PyrexTypes.CArrayType(base_type, index)
error(self.pos, "Array size must be a compile time constant")
return None
@@ -3406,6 +3555,10 @@
if index_func is not None:
return index_func.type.return_type
+ if is_pythran_expr(base_type) and is_pythran_expr(index_type):
+ index_with_type = (self.index, index_type)
+ return PythranExpr(pythran_indexing_type(base_type, [index_with_type]))
+
# may be slicing or indexing, we don't know
if base_type in (unicode_type, str_type):
# these types always returns their own type on Python indexing/slicing
@@ -3531,7 +3684,7 @@
else:
# not using 'uchar' to enable fast and safe error reporting as '-1'
self.type = PyrexTypes.c_int_type
- elif is_slice and base_type in (bytes_type, str_type, unicode_type, list_type, tuple_type):
+ elif is_slice and base_type in (bytes_type, bytearray_type, str_type, unicode_type, list_type, tuple_type):
self.type = base_type
else:
item_type = None
@@ -3632,23 +3785,33 @@
else:
indices = [self.index]
- base_type = self.base.type
+ base = self.base
+ base_type = base.type
replacement_node = None
if base_type.is_memoryviewslice:
# memoryviewslice indexing or slicing
from . import MemoryView
+ if base.is_memview_slice:
+ # For memory views, "view[i][j]" is the same as "view[i, j]" => use the latter for speed.
+ merged_indices = base.merged_indices(indices)
+ if merged_indices is not None:
+ base = base.base
+ base_type = base.type
+ indices = merged_indices
have_slices, indices, newaxes = MemoryView.unellipsify(indices, base_type.ndim)
if have_slices:
- replacement_node = MemoryViewSliceNode(self.pos, indices=indices, base=self.base)
+ replacement_node = MemoryViewSliceNode(self.pos, indices=indices, base=base)
else:
- replacement_node = MemoryViewIndexNode(self.pos, indices=indices, base=self.base)
+ replacement_node = MemoryViewIndexNode(self.pos, indices=indices, base=base)
elif base_type.is_buffer or base_type.is_pythran_expr:
if base_type.is_pythran_expr or len(indices) == base_type.ndim:
# Buffer indexing
is_buffer_access = True
indices = [index.analyse_types(env) for index in indices]
if base_type.is_pythran_expr:
- do_replacement = all(index.type.is_int or index.is_slice or index.type.is_pythran_expr for index in indices)
+ do_replacement = all(
+ index.type.is_int or index.is_slice or index.type.is_pythran_expr
+ for index in indices)
if do_replacement:
for i,index in enumerate(indices):
if index.is_slice:
@@ -3658,7 +3821,7 @@
else:
do_replacement = all(index.type.is_int for index in indices)
if do_replacement:
- replacement_node = BufferIndexNode(self.pos, indices=indices, base=self.base)
+ replacement_node = BufferIndexNode(self.pos, indices=indices, base=base)
# On cloning, indices is cloned. Otherwise, unpack index into indices.
assert not isinstance(self.index, CloneNode)
@@ -3825,6 +3988,8 @@
if not self.is_temp:
# all handled in self.calculate_result_code()
return
+
+ utility_code = None
if self.type.is_pyobject:
error_value = 'NULL'
if self.index.type.is_int:
@@ -3834,32 +3999,38 @@
function = "__Pyx_GetItemInt_Tuple"
else:
function = "__Pyx_GetItemInt"
- code.globalstate.use_utility_code(
- TempitaUtilityCode.load_cached("GetItemInt", "ObjectHandling.c"))
+ utility_code = TempitaUtilityCode.load_cached("GetItemInt", "ObjectHandling.c")
else:
if self.base.type is dict_type:
function = "__Pyx_PyDict_GetItem"
- code.globalstate.use_utility_code(
- UtilityCode.load_cached("DictGetItem", "ObjectHandling.c"))
+ utility_code = UtilityCode.load_cached("DictGetItem", "ObjectHandling.c")
+ elif self.base.type is py_object_type and self.index.type in (str_type, unicode_type):
+ # obj[str] is probably doing a dict lookup
+ function = "__Pyx_PyObject_Dict_GetItem"
+ utility_code = UtilityCode.load_cached("DictGetItem", "ObjectHandling.c")
else:
- function = "PyObject_GetItem"
+ function = "__Pyx_PyObject_GetItem"
+ code.globalstate.use_utility_code(
+ TempitaUtilityCode.load_cached("GetItemInt", "ObjectHandling.c"))
+ utility_code = UtilityCode.load_cached("ObjectGetItem", "ObjectHandling.c")
elif self.type.is_unicode_char and self.base.type is unicode_type:
assert self.index.type.is_int
function = "__Pyx_GetItemInt_Unicode"
error_value = '(Py_UCS4)-1'
- code.globalstate.use_utility_code(
- UtilityCode.load_cached("GetItemIntUnicode", "StringTools.c"))
+ utility_code = UtilityCode.load_cached("GetItemIntUnicode", "StringTools.c")
elif self.base.type is bytearray_type:
assert self.index.type.is_int
assert self.type.is_int
function = "__Pyx_GetItemInt_ByteArray"
error_value = '-1'
- code.globalstate.use_utility_code(
- UtilityCode.load_cached("GetItemIntByteArray", "StringTools.c"))
+ utility_code = UtilityCode.load_cached("GetItemIntByteArray", "StringTools.c")
elif not (self.base.type.is_cpp_class and self.exception_check):
assert False, "unexpected type %s and base type %s for indexing" % (
self.type, self.base.type)
+ if utility_code is not None:
+ code.globalstate.use_utility_code(utility_code)
+
if self.index.type.is_int:
index_code = self.index.result()
else:
@@ -3869,6 +4040,7 @@
translate_cpp_exception(code, self.pos,
"%s = %s[%s];" % (self.result(), self.base.result(),
self.index.result()),
+ self.result() if self.type.is_pyobject else None,
self.exception_value, self.in_nogil_context)
else:
error_check = '!%s' if error_value == 'NULL' else '%%s == %s' % error_value
@@ -3939,6 +4111,7 @@
# both exception handlers are the same.
translate_cpp_exception(code, self.pos,
"%s = %s;" % (self.result(), rhs.result()),
+ self.result() if self.type.is_pyobject else None,
self.exception_value, self.in_nogil_context)
else:
code.putln(
@@ -4056,7 +4229,8 @@
def analyse_buffer_index(self, env, getting):
if is_pythran_expr(self.base.type):
- self.type = PythranExpr(pythran_indexing_type(self.base.type, self.indices))
+ index_with_type_list = [(idx, idx.type) for idx in self.indices]
+ self.type = PythranExpr(pythran_indexing_type(self.base.type, index_with_type_list))
else:
self.base = self.base.coerce_to_simple(env)
self.type = self.base.type.dtype
@@ -4078,10 +4252,6 @@
def nogil_check(self, env):
if self.is_buffer_access or self.is_memview_index:
- if env.directives['boundscheck']:
- warning(self.pos, "Use boundscheck(False) for faster access",
- level=1)
-
if self.type.is_pyobject:
error(self.pos, "Cannot access buffer with object dtype without gil")
self.type = error_type
@@ -4108,6 +4278,11 @@
"""
ndarray[1, 2, 3] and memslice[1, 2, 3]
"""
+ if self.in_nogil_context:
+ if self.is_buffer_access or self.is_memview_index:
+ if code.globalstate.directives['boundscheck']:
+ warning(self.pos, "Use boundscheck(False) for faster access", level=1)
+
# Assign indices to temps of at least (s)size_t to allow further index calculations.
index_temps = [self.get_index_in_temp(code,ivar) for ivar in self.indices]
@@ -4141,7 +4316,7 @@
if is_pythran_expr(base_type) and is_pythran_supported_type(rhs.type):
obj = code.funcstate.allocate_temp(PythranExpr(pythran_type(self.base.type)), manage_ref=False)
# We have got to do this because we have to declare pythran objects
- # at the beggining of the functions.
+ # at the beginning of the functions.
# Indeed, Cython uses "goto" statement for error management, and
# RAII doesn't work with that kind of construction.
# Moreover, the way Pythran expressions are made is that they don't
@@ -4150,7 +4325,7 @@
# case.
code.putln("__Pyx_call_destructor(%s);" % obj)
code.putln("new (&%s) decltype(%s){%s};" % (obj, obj, self.base.pythran_result()))
- code.putln("%s(%s) %s= %s;" % (
+ code.putln("%s%s %s= %s;" % (
obj,
pythran_indexing_code(self.indices),
op,
@@ -4181,7 +4356,7 @@
if is_pythran_expr(self.base.type):
res = self.result()
code.putln("__Pyx_call_destructor(%s);" % res)
- code.putln("new (&%s) decltype(%s){%s(%s)};" % (
+ code.putln("new (&%s) decltype(%s){%s%s};" % (
res,
res,
self.base.pythran_result(),
@@ -4210,6 +4385,11 @@
indices = self.indices
have_slices, indices, newaxes = MemoryView.unellipsify(indices, self.base.type.ndim)
+ if not getting:
+ self.writable_needed = True
+ if self.base.is_name or self.base.is_attribute:
+ self.base.entry.type.writable_needed = True
+
self.memslice_index = (not newaxes and len(indices) == self.base.type.ndim)
axes = []
@@ -4357,6 +4537,37 @@
else:
return MemoryCopySlice(self.pos, self)
+ def merged_indices(self, indices):
+ """Return a new list of indices/slices with 'indices' merged into the current ones
+ according to slicing rules.
+ This is used to implement "view[i][j]" => "view[i, j]".
+ Return None if the indices cannot (easily) be merged at compile time.
+ """
+ if not indices:
+ return None
+ # NOTE: Need to evaluate "self.original_indices" here as they might differ from "self.indices".
+ new_indices = self.original_indices[:]
+ indices = indices[:]
+ for i, s in enumerate(self.original_indices):
+ if s.is_slice:
+ if s.start.is_none and s.stop.is_none and s.step.is_none:
+ # Full slice found, replace by index.
+ new_indices[i] = indices[0]
+ indices.pop(0)
+ if not indices:
+ return new_indices
+ else:
+ # Found something non-trivial, e.g. a partial slice.
+ return None
+ elif not s.type.is_int:
+ # Not a slice, not an integer index => could be anything...
+ return None
+ if indices:
+ if len(new_indices) + len(indices) > self.base.type.ndim:
+ return None
+ new_indices += indices
+ return new_indices
+
def is_simple(self):
if self.is_ellipsis_noop:
# TODO: fix SimpleCallNode.is_simple()
@@ -4528,7 +4739,7 @@
return bytes_type
elif base_type.is_pyunicode_ptr:
return unicode_type
- elif base_type in (bytes_type, str_type, unicode_type,
+ elif base_type in (bytes_type, bytearray_type, str_type, unicode_type,
basestring_type, list_type, tuple_type):
return base_type
elif base_type.is_ptr or base_type.is_array:
@@ -4599,7 +4810,7 @@
start=self.start or none_node,
stop=self.stop or none_node,
step=none_node)
- index_node = IndexNode(self.pos, index, base=self.base)
+ index_node = IndexNode(self.pos, index=index, base=self.base)
return index_node.analyse_base_and_index_types(
env, getting=getting, setting=not getting,
analyse_base=False)
@@ -4651,13 +4862,59 @@
).analyse_types(env)
else:
c_int = PyrexTypes.c_py_ssize_t_type
+
+ def allow_none(node, default_value, env):
+ # Coerce to Py_ssize_t, but allow None as meaning the default slice bound.
+ from .UtilNodes import EvalWithTempExprNode, ResultRefNode
+
+ node_ref = ResultRefNode(node)
+ new_expr = CondExprNode(
+ node.pos,
+ true_val=IntNode(
+ node.pos,
+ type=c_int,
+ value=default_value,
+ constant_result=int(default_value) if default_value.isdigit() else not_a_constant,
+ ),
+ false_val=node_ref.coerce_to(c_int, env),
+ test=PrimaryCmpNode(
+ node.pos,
+ operand1=node_ref,
+ operator='is',
+ operand2=NoneNode(node.pos),
+ ).analyse_types(env)
+ ).analyse_result_type(env)
+ return EvalWithTempExprNode(node_ref, new_expr)
+
if self.start:
+ if self.start.type.is_pyobject:
+ self.start = allow_none(self.start, '0', env)
self.start = self.start.coerce_to(c_int, env)
if self.stop:
+ if self.stop.type.is_pyobject:
+ self.stop = allow_none(self.stop, 'PY_SSIZE_T_MAX', env)
self.stop = self.stop.coerce_to(c_int, env)
self.is_temp = 1
return self
+ def analyse_as_type(self, env):
+ base_type = self.base.analyse_as_type(env)
+ if base_type and not base_type.is_pyobject:
+ if not self.start and not self.stop:
+ # memory view
+ from . import MemoryView
+ env.use_utility_code(MemoryView.view_utility_code)
+ none_node = NoneNode(self.pos)
+ slice_node = SliceNode(
+ self.pos,
+ start=none_node,
+ stop=none_node,
+ step=none_node,
+ )
+ return PyrexTypes.MemoryViewSliceType(
+ base_type, MemoryView.get_axes_specs(env, [slice_node]))
+ return None
+
nogil_check = Node.gil_error
gil_message = "Slicing Python object"
@@ -5003,8 +5260,11 @@
def generate_result_code(self, code):
if self.is_literal:
- self.result_code = code.get_py_const(py_object_type, 'slice', cleanup_level=2)
- code = code.get_cached_constants_writer()
+ dedup_key = make_dedup_key(self.type, (self,))
+ self.result_code = code.get_py_const(py_object_type, 'slice', cleanup_level=2, dedup_key=dedup_key)
+ code = code.get_cached_constants_writer(self.result_code)
+ if code is None:
+ return # already initialised
code.mark_pos(self.pos)
code.putln(
@@ -5140,6 +5400,32 @@
return False
return ExprNode.may_be_none(self)
+ def set_py_result_type(self, function, func_type=None):
+ if func_type is None:
+ func_type = function.type
+ if func_type is Builtin.type_type and (
+ function.is_name and
+ function.entry and
+ function.entry.is_builtin and
+ function.entry.name in Builtin.types_that_construct_their_instance):
+ # calling a builtin type that returns a specific object type
+ if function.entry.name == 'float':
+ # the following will come true later on in a transform
+ self.type = PyrexTypes.c_double_type
+ self.result_ctype = PyrexTypes.c_double_type
+ else:
+ self.type = Builtin.builtin_types[function.entry.name]
+ self.result_ctype = py_object_type
+ self.may_return_none = False
+ elif function.is_name and function.type_entry:
+ # We are calling an extension type constructor. As long as we do not
+ # support __new__(), the result type is clear
+ self.type = function.type_entry.type
+ self.result_ctype = py_object_type
+ self.may_return_none = False
+ else:
+ self.type = py_object_type
+
def analyse_as_type_constructor(self, env):
type = self.function.analyse_as_type(env)
if type and type.is_struct_or_union:
@@ -5202,6 +5488,7 @@
has_optional_args = False
nogil = False
analysed = False
+ overflowcheck = False
def compile_time_value(self, denv):
function = self.function.compile_time_value(denv)
@@ -5222,6 +5509,11 @@
error(self.args[0].pos, "Unknown type")
else:
return PyrexTypes.CPtrType(type)
+ elif attr == 'typeof':
+ if len(self.args) != 1:
+ error(self.args.pos, "only one type allowed.")
+ operand = self.args[0].analyse_types(env)
+ return operand.type
def explicit_args_kwds(self):
return self.args, None
@@ -5244,7 +5536,8 @@
func_type = self.function_type()
self.is_numpy_call_with_exprs = False
- if has_np_pythran(env) and self.function.is_numpy_attribute:
+ if (has_np_pythran(env) and function.is_numpy_attribute and
+ pythran_is_numpy_func_supported(function)):
has_pythran_args = True
self.arg_tuple = TupleNode(self.pos, args = self.args)
self.arg_tuple = self.arg_tuple.analyse_types(env)
@@ -5252,37 +5545,18 @@
has_pythran_args &= is_pythran_supported_node_or_none(arg)
self.is_numpy_call_with_exprs = bool(has_pythran_args)
if self.is_numpy_call_with_exprs:
- self.args = None
- env.add_include_file("pythonic/numpy/%s.hpp" % self.function.attribute)
- self.type = PythranExpr(pythran_func_type(self.function.attribute, self.arg_tuple.args))
- self.may_return_none = True
- self.is_temp = 1
+ env.add_include_file(pythran_get_func_include_file(function))
+ return NumPyMethodCallNode.from_node(
+ self,
+ function=function,
+ arg_tuple=self.arg_tuple,
+ type=PythranExpr(pythran_func_type(function, self.arg_tuple.args)),
+ )
elif func_type.is_pyobject:
self.arg_tuple = TupleNode(self.pos, args = self.args)
self.arg_tuple = self.arg_tuple.analyse_types(env).coerce_to_pyobject(env)
self.args = None
- if func_type is Builtin.type_type and function.is_name and \
- function.entry and \
- function.entry.is_builtin and \
- function.entry.name in Builtin.types_that_construct_their_instance:
- # calling a builtin type that returns a specific object type
- if function.entry.name == 'float':
- # the following will come true later on in a transform
- self.type = PyrexTypes.c_double_type
- self.result_ctype = PyrexTypes.c_double_type
- else:
- self.type = Builtin.builtin_types[function.entry.name]
- self.result_ctype = py_object_type
- self.may_return_none = False
- elif function.is_name and function.type_entry:
- # We are calling an extension type constructor. As
- # long as we do not support __new__(), the result type
- # is clear
- self.type = function.type_entry.type
- self.result_ctype = py_object_type
- self.may_return_none = False
- else:
- self.type = py_object_type
+ self.set_py_result_type(function, func_type)
self.is_temp = 1
else:
self.args = [ arg.analyse_types(env) for arg in self.args ]
@@ -5377,7 +5651,7 @@
if formal_arg.not_none:
if self.self:
self.self = self.self.as_none_safe_node(
- "'NoneType' object has no attribute '%s'",
+ "'NoneType' object has no attribute '%{0}s'".format('.30' if len(entry.name) <= 30 else ''),
error='PyExc_AttributeError',
format_args=[entry.name])
else:
@@ -5403,8 +5677,6 @@
for i in range(min(max_nargs, actual_nargs)):
formal_arg = func_type.args[i]
formal_type = formal_arg.type
- if formal_type.is_const:
- formal_type = formal_type.const_base_type
arg = args[i].coerce_to(formal_type, env)
if formal_arg.not_none:
# C methods must do the None checks at *call* time
@@ -5511,6 +5783,8 @@
if func_type.exception_value is None:
env.use_utility_code(UtilityCode.load_cached("CppExceptionConversion", "CppSupport.cpp"))
+ self.overflowcheck = env.directives['overflowcheck']
+
def calculate_result_code(self):
return self.c_call_code()
@@ -5550,29 +5824,64 @@
return False # skip allocation of unused result temp
return True
+ def generate_evaluation_code(self, code):
+ function = self.function
+ if function.is_name or function.is_attribute:
+ code.globalstate.use_entry_utility_code(function.entry)
+
+ if not function.type.is_pyobject or len(self.arg_tuple.args) > 1 or (
+ self.arg_tuple.args and self.arg_tuple.is_literal):
+ super(SimpleCallNode, self).generate_evaluation_code(code)
+ return
+
+ # Special case 0-args and try to avoid explicit tuple creation for Python calls with 1 arg.
+ arg = self.arg_tuple.args[0] if self.arg_tuple.args else None
+ subexprs = (self.self, self.coerced_self, function, arg)
+ for subexpr in subexprs:
+ if subexpr is not None:
+ subexpr.generate_evaluation_code(code)
+
+ code.mark_pos(self.pos)
+ assert self.is_temp
+ self.allocate_temp_result(code)
+
+ if arg is None:
+ code.globalstate.use_utility_code(UtilityCode.load_cached(
+ "PyObjectCallNoArg", "ObjectHandling.c"))
+ code.putln(
+ "%s = __Pyx_PyObject_CallNoArg(%s); %s" % (
+ self.result(),
+ function.py_result(),
+ code.error_goto_if_null(self.result(), self.pos)))
+ else:
+ code.globalstate.use_utility_code(UtilityCode.load_cached(
+ "PyObjectCallOneArg", "ObjectHandling.c"))
+ code.putln(
+ "%s = __Pyx_PyObject_CallOneArg(%s, %s); %s" % (
+ self.result(),
+ function.py_result(),
+ arg.py_result(),
+ code.error_goto_if_null(self.result(), self.pos)))
+
+ code.put_gotref(self.py_result())
+
+ for subexpr in subexprs:
+ if subexpr is not None:
+ subexpr.generate_disposal_code(code)
+ subexpr.free_temps(code)
+
def generate_result_code(self, code):
func_type = self.function_type()
- if self.function.is_name or self.function.is_attribute:
- code.globalstate.use_entry_utility_code(self.function.entry)
if func_type.is_pyobject:
- if func_type is not type_type and not self.arg_tuple.args and self.arg_tuple.is_literal:
- code.globalstate.use_utility_code(UtilityCode.load_cached(
- "PyObjectCallNoArg", "ObjectHandling.c"))
- code.putln(
- "%s = __Pyx_PyObject_CallNoArg(%s); %s" % (
- self.result(),
- self.function.py_result(),
- code.error_goto_if_null(self.result(), self.pos)))
- else:
- arg_code = self.arg_tuple.py_result()
- code.globalstate.use_utility_code(UtilityCode.load_cached(
- "PyObjectCall", "ObjectHandling.c"))
- code.putln(
- "%s = __Pyx_PyObject_Call(%s, %s, NULL); %s" % (
- self.result(),
- self.function.py_result(),
- arg_code,
- code.error_goto_if_null(self.result(), self.pos)))
+ arg_code = self.arg_tuple.py_result()
+ code.globalstate.use_utility_code(UtilityCode.load_cached(
+ "PyObjectCall", "ObjectHandling.c"))
+ code.putln(
+ "%s = __Pyx_PyObject_Call(%s, %s, NULL); %s" % (
+ self.result(),
+ self.function.py_result(),
+ arg_code,
+ code.error_goto_if_null(self.result(), self.pos)))
code.put_gotref(self.py_result())
elif func_type.is_cfunction:
if self.has_optional_args:
@@ -5596,11 +5905,11 @@
elif self.type.is_memoryviewslice:
assert self.is_temp
exc_checks.append(self.type.error_condition(self.result()))
- else:
+ elif func_type.exception_check != '+':
exc_val = func_type.exception_value
exc_check = func_type.exception_check
if exc_val is not None:
- exc_checks.append("%s == %s" % (self.result(), exc_val))
+ exc_checks.append("%s == %s" % (self.result(), func_type.return_type.cast_code(exc_val)))
if exc_check:
if self.nogil:
exc_checks.append("__Pyx_ErrOccurredWithGIL()")
@@ -5619,9 +5928,16 @@
lhs = ""
if func_type.exception_check == '+':
translate_cpp_exception(code, self.pos, '%s%s;' % (lhs, rhs),
+ self.result() if self.type.is_pyobject else None,
func_type.exception_value, self.nogil)
else:
- if exc_checks:
+ if (self.overflowcheck
+ and self.type.is_int
+ and self.type.signed
+ and self.function.result() in ('abs', 'labs', '__Pyx_abs_longlong')):
+ goto_error = 'if (unlikely(%s < 0)) { PyErr_SetString(PyExc_OverflowError, "value too large"); %s; }' % (
+ self.result(), code.error_goto(self.pos))
+ elif exc_checks:
goto_error = code.error_goto_if(" && ".join(exc_checks), self.pos)
else:
goto_error = ""
@@ -5631,11 +5947,34 @@
if self.has_optional_args:
code.funcstate.release_temp(self.opt_arg_struct)
- @classmethod
- def from_node(cls, node, **kwargs):
- ret = super(SimpleCallNode, cls).from_node(node, **kwargs)
- ret.is_numpy_call_with_exprs = node.is_numpy_call_with_exprs
- return ret
+
+class NumPyMethodCallNode(SimpleCallNode):
+ # Pythran call to a NumPy function or method.
+ #
+ # function ExprNode the function/method to call
+ # arg_tuple TupleNode the arguments as an args tuple
+
+ subexprs = ['function', 'arg_tuple']
+ is_temp = True
+ may_return_none = True
+
+ def generate_evaluation_code(self, code):
+ code.mark_pos(self.pos)
+ self.allocate_temp_result(code)
+
+ self.function.generate_evaluation_code(code)
+ assert self.arg_tuple.mult_factor is None
+ args = self.arg_tuple.args
+ for arg in args:
+ arg.generate_evaluation_code(code)
+
+ code.putln("// function evaluation code for numpy function")
+ code.putln("__Pyx_call_destructor(%s);" % self.result())
+ code.putln("new (&%s) decltype(%s){%s{}(%s)};" % (
+ self.result(),
+ self.result(),
+ pythran_functor(self.function),
+ ", ".join(a.pythran_result() for a in args)))
class PyMethodCallNode(SimpleCallNode):
@@ -5658,16 +5997,6 @@
for arg in args:
arg.generate_evaluation_code(code)
- if self.is_numpy_call_with_exprs:
- code.putln("// function evaluation code for numpy function")
- code.putln("__Pyx_call_destructor(%s);" % self.result())
- code.putln("new (&%s) decltype(%s){pythonic::numpy::functor::%s{}(%s)};" % (
- self.result(),
- self.result(),
- self.function.attribute,
- ", ".join(a.pythran_result() for a in self.arg_tuple.args)))
- return
-
# make sure function is in temp so that we can replace the reference below if it's a method
reuse_function_temp = self.function.is_temp
if reuse_function_temp:
@@ -5724,44 +6053,38 @@
if not args:
# fastest special case: try to avoid tuple creation
- code.putln("if (%s) {" % self_arg)
+ code.globalstate.use_utility_code(
+ UtilityCode.load_cached("PyObjectCallNoArg", "ObjectHandling.c"))
code.globalstate.use_utility_code(
UtilityCode.load_cached("PyObjectCallOneArg", "ObjectHandling.c"))
code.putln(
- "%s = __Pyx_PyObject_CallOneArg(%s, %s); %s" % (
- self.result(),
+ "%s = (%s) ? __Pyx_PyObject_CallOneArg(%s, %s) : __Pyx_PyObject_CallNoArg(%s);" % (
+ self.result(), self_arg,
function, self_arg,
- code.error_goto_if_null(self.result(), self.pos)))
- code.put_decref_clear(self_arg, py_object_type)
+ function))
+ code.put_xdecref_clear(self_arg, py_object_type)
code.funcstate.release_temp(self_arg)
- code.putln("} else {")
+ code.putln(code.error_goto_if_null(self.result(), self.pos))
+ code.put_gotref(self.py_result())
+ elif len(args) == 1:
+ # fastest special case: try to avoid tuple creation
code.globalstate.use_utility_code(
- UtilityCode.load_cached("PyObjectCallNoArg", "ObjectHandling.c"))
+ UtilityCode.load_cached("PyObjectCall2Args", "ObjectHandling.c"))
+ code.globalstate.use_utility_code(
+ UtilityCode.load_cached("PyObjectCallOneArg", "ObjectHandling.c"))
+ arg = args[0]
code.putln(
- "%s = __Pyx_PyObject_CallNoArg(%s); %s" % (
- self.result(),
- function,
- code.error_goto_if_null(self.result(), self.pos)))
- code.putln("}")
+ "%s = (%s) ? __Pyx_PyObject_Call2Args(%s, %s, %s) : __Pyx_PyObject_CallOneArg(%s, %s);" % (
+ self.result(), self_arg,
+ function, self_arg, arg.py_result(),
+ function, arg.py_result()))
+ code.put_xdecref_clear(self_arg, py_object_type)
+ code.funcstate.release_temp(self_arg)
+ arg.generate_disposal_code(code)
+ arg.free_temps(code)
+ code.putln(code.error_goto_if_null(self.result(), self.pos))
code.put_gotref(self.py_result())
else:
- if len(args) == 1:
- code.putln("if (!%s) {" % self_arg)
- code.globalstate.use_utility_code(
- UtilityCode.load_cached("PyObjectCallOneArg", "ObjectHandling.c"))
- arg = args[0]
- code.putln(
- "%s = __Pyx_PyObject_CallOneArg(%s, %s); %s" % (
- self.result(),
- function, arg.py_result(),
- code.error_goto_if_null(self.result(), self.pos)))
- arg.generate_disposal_code(code)
- code.put_gotref(self.py_result())
- code.putln("} else {")
- arg_offset = 1
- else:
- arg_offset = arg_offset_cname
-
code.globalstate.use_utility_code(
UtilityCode.load_cached("PyFunctionFastCall", "ObjectHandling.c"))
code.globalstate.use_utility_code(
@@ -5779,9 +6102,9 @@
call_prefix,
function,
Naming.quick_temp_cname,
- arg_offset,
+ arg_offset_cname,
len(args),
- arg_offset,
+ arg_offset_cname,
code.error_goto_if_null(self.result(), self.pos)))
code.put_xdecref_clear(self_arg, py_object_type)
code.put_gotref(self.py_result())
@@ -5793,7 +6116,7 @@
code.putln("{")
args_tuple = code.funcstate.allocate_temp(py_object_type, manage_ref=True)
code.putln("%s = PyTuple_New(%d+%s); %s" % (
- args_tuple, len(args), arg_offset,
+ args_tuple, len(args), arg_offset_cname,
code.error_goto_if_null(args_tuple, self.pos)))
code.put_gotref(args_tuple)
@@ -5809,7 +6132,7 @@
arg.make_owned_reference(code)
code.put_giveref(arg.py_result())
code.putln("PyTuple_SET_ITEM(%s, %d+%s, %s);" % (
- args_tuple, i, arg_offset, arg.py_result()))
+ args_tuple, i, arg_offset_cname, arg.py_result()))
if len(args) > 1:
code.funcstate.release_temp(arg_offset_cname)
@@ -5978,6 +6301,37 @@
SimpleCallNode.__init__(self, pos, **kwargs)
+class CachedBuiltinMethodCallNode(CallNode):
+ # Python call to a method of a known Python builtin (only created in transforms)
+
+ subexprs = ['obj', 'args']
+ is_temp = True
+
+ def __init__(self, call_node, obj, method_name, args):
+ super(CachedBuiltinMethodCallNode, self).__init__(
+ call_node.pos,
+ obj=obj, method_name=method_name, args=args,
+ may_return_none=call_node.may_return_none,
+ type=call_node.type)
+
+ def may_be_none(self):
+ if self.may_return_none is not None:
+ return self.may_return_none
+ return ExprNode.may_be_none(self)
+
+ def generate_result_code(self, code):
+ type_cname = self.obj.type.cname
+ obj_cname = self.obj.py_result()
+ args = [arg.py_result() for arg in self.args]
+ call_code = code.globalstate.cached_unbound_method_call_code(
+ obj_cname, type_cname, self.method_name, args)
+ code.putln("%s = %s; %s" % (
+ self.result(), call_code,
+ code.error_goto_if_null(self.result(), self.pos)
+ ))
+ code.put_gotref(self.result())
+
+
class GeneralCallNode(CallNode):
# General Python function call, including keyword,
# * and ** arguments.
@@ -6036,15 +6390,7 @@
self.positional_args = self.positional_args.analyse_types(env)
self.positional_args = \
self.positional_args.coerce_to_pyobject(env)
- function = self.function
- if function.is_name and function.type_entry:
- # We are calling an extension type constructor. As long
- # as we do not support __new__(), the result type is clear
- self.type = function.type_entry.type
- self.result_ctype = py_object_type
- self.may_return_none = False
- else:
- self.type = py_object_type
+ self.set_py_result_type(self.function)
self.is_temp = 1
return self
@@ -6211,6 +6557,7 @@
# arg ExprNode
subexprs = ['arg']
+ is_temp = 1
def calculate_constant_result(self):
self.constant_result = tuple(self.arg.constant_result)
@@ -6227,7 +6574,6 @@
if self.arg.type is tuple_type:
return self.arg.as_none_safe_node("'NoneType' object is not iterable")
self.type = tuple_type
- self.is_temp = 1
return self
def may_be_none(self):
@@ -6237,10 +6583,11 @@
gil_message = "Constructing Python tuple"
def generate_result_code(self, code):
+ cfunc = "__Pyx_PySequence_Tuple" if self.arg.type in (py_object_type, tuple_type) else "PySequence_Tuple"
code.putln(
- "%s = PySequence_Tuple(%s); %s" % (
+ "%s = %s(%s); %s" % (
self.result(),
- self.arg.py_result(),
+ cfunc, self.arg.py_result(),
code.error_goto_if_null(self.result(), self.pos)))
code.put_gotref(self.py_result())
@@ -6759,7 +7106,7 @@
format_args = ()
if (self.obj.type.is_extension_type and self.needs_none_check and not
self.is_py_attr):
- msg = "'NoneType' object has no attribute '%s'"
+ msg = "'NoneType' object has no attribute '%{0}s'".format('.30' if len(self.attribute) <= 30 else '')
format_args = (self.attribute,)
elif self.obj.type.is_memoryviewslice:
if self.is_memslice_transpose:
@@ -7308,17 +7655,14 @@
code.putln("PyObject* sequence = %s;" % rhs.py_result())
# list/tuple => check size
- code.putln("#if !CYTHON_COMPILING_IN_PYPY")
- code.putln("Py_ssize_t size = Py_SIZE(sequence);")
- code.putln("#else")
- code.putln("Py_ssize_t size = PySequence_Size(sequence);") # < 0 => exception
- code.putln("#endif")
+ code.putln("Py_ssize_t size = __Pyx_PySequence_SIZE(sequence);")
code.putln("if (unlikely(size != %d)) {" % len(self.args))
code.globalstate.use_utility_code(raise_too_many_values_to_unpack)
code.putln("if (size > %d) __Pyx_RaiseTooManyValuesError(%d);" % (
len(self.args), len(self.args)))
code.globalstate.use_utility_code(raise_need_more_values_to_unpack)
code.putln("else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);")
+ # < 0 => exception
code.putln(code.error_goto(self.pos))
code.putln("}")
@@ -7520,7 +7864,7 @@
code.put_decref(target_list, py_object_type)
code.putln('%s = %s; %s = NULL;' % (target_list, sublist_temp, sublist_temp))
code.putln('#else')
- code.putln('%s = %s;' % (sublist_temp, sublist_temp)) # avoid warning about unused variable
+ code.putln('(void)%s;' % sublist_temp) # avoid warning about unused variable
code.funcstate.release_temp(sublist_temp)
code.putln('#endif')
@@ -7549,10 +7893,10 @@
if self.mult_factor or not self.args:
return tuple_type
arg_types = [arg.infer_type(env) for arg in self.args]
- if any(type.is_pyobject or type.is_unspecified or type.is_fused for type in arg_types):
+ if any(type.is_pyobject or type.is_memoryviewslice or type.is_unspecified or type.is_fused
+ for type in arg_types):
return tuple_type
- else:
- return env.declare_tuple_type(self.pos, arg_types).type
+ return env.declare_tuple_type(self.pos, arg_types).type
def analyse_types(self, env, skip_children=False):
if len(self.args) == 0:
@@ -7566,7 +7910,8 @@
arg.starred_expr_allowed_here = True
self.args[i] = arg.analyse_types(env)
if (not self.mult_factor and
- not any((arg.is_starred or arg.type.is_pyobject or arg.type.is_fused) for arg in self.args)):
+ not any((arg.is_starred or arg.type.is_pyobject or arg.type.is_memoryviewslice or arg.type.is_fused)
+ for arg in self.args)):
self.type = env.declare_tuple_type(self.pos, (arg.type for arg in self.args)).type
self.is_temp = 1
return self
@@ -7649,26 +7994,26 @@
if len(self.args) == 0:
# result_code is Naming.empty_tuple
return
- if self.is_partly_literal:
- # underlying tuple is const, but factor is not
- tuple_target = code.get_py_const(py_object_type, 'tuple', cleanup_level=2)
- const_code = code.get_cached_constants_writer()
- const_code.mark_pos(self.pos)
- self.generate_sequence_packing_code(const_code, tuple_target, plain=True)
- const_code.put_giveref(tuple_target)
- code.putln('%s = PyNumber_Multiply(%s, %s); %s' % (
- self.result(), tuple_target, self.mult_factor.py_result(),
- code.error_goto_if_null(self.result(), self.pos)
+
+ if self.is_literal or self.is_partly_literal:
+ # The "mult_factor" is part of the deduplication if it is also constant, i.e. when
+ # we deduplicate the multiplied result. Otherwise, only deduplicate the constant part.
+ dedup_key = make_dedup_key(self.type, [self.mult_factor if self.is_literal else None] + self.args)
+ tuple_target = code.get_py_const(py_object_type, 'tuple', cleanup_level=2, dedup_key=dedup_key)
+ const_code = code.get_cached_constants_writer(tuple_target)
+ if const_code is not None:
+ # constant is not yet initialised
+ const_code.mark_pos(self.pos)
+ self.generate_sequence_packing_code(const_code, tuple_target, plain=not self.is_literal)
+ const_code.put_giveref(tuple_target)
+ if self.is_literal:
+ self.result_code = tuple_target
+ else:
+ code.putln('%s = PyNumber_Multiply(%s, %s); %s' % (
+ self.result(), tuple_target, self.mult_factor.py_result(),
+ code.error_goto_if_null(self.result(), self.pos)
))
- code.put_gotref(self.py_result())
- elif self.is_literal:
- # non-empty cached tuple => result is global constant,
- # creation code goes into separate code writer
- self.result_code = code.get_py_const(py_object_type, 'tuple', cleanup_level=2)
- code = code.get_cached_constants_writer()
- code.mark_pos(self.pos)
- self.generate_sequence_packing_code(code)
- code.put_giveref(self.py_result())
+ code.put_gotref(self.py_result())
else:
self.type.entry.used = True
self.generate_sequence_packing_code(code)
@@ -7690,7 +8035,7 @@
return ()
def infer_type(self, env):
- # TOOD: Infer non-object list arrays.
+ # TODO: Infer non-object list arrays.
return list_type
def analyse_expressions(self, env):
@@ -7701,11 +8046,10 @@
return node.coerce_to_pyobject(env)
def analyse_types(self, env):
- hold_errors()
- self.original_args = list(self.args)
- node = SequenceNode.analyse_types(self, env)
- node.obj_conversion_errors = held_errors()
- release_errors(ignore=True)
+ with local_errors(ignore=True) as errors:
+ self.original_args = list(self.args)
+ node = SequenceNode.analyse_types(self, env)
+ node.obj_conversion_errors = errors
if env.is_module_scope:
self.in_module_scope = True
node = node._create_merge_node_if_necessary(env)
@@ -7883,9 +8227,8 @@
code.putln('{ /* enter inner scope */')
py_entries = []
- for entry in self.expr_scope.var_entries:
+ for _, entry in sorted(item for item in self.expr_scope.entries.items() if item[0]):
if not entry.in_closure:
- code.put_var_declaration(entry)
if entry.type.is_pyobject and entry.used:
py_entries.append(entry)
if not py_entries:
@@ -7895,14 +8238,13 @@
return
# must free all local Python references at each exit point
- old_loop_labels = tuple(code.new_loop_labels())
+ old_loop_labels = code.new_loop_labels()
old_error_label = code.new_error_label()
generate_inner_evaluation_code(code)
# normal (non-error) exit
- for entry in py_entries:
- code.put_var_decref(entry)
+ self._generate_vars_cleanup(code, py_entries)
# error/loop body exit points
exit_scope = code.new_label('exit_scope')
@@ -7911,8 +8253,7 @@
list(zip(code.get_loop_labels(), old_loop_labels))):
if code.label_used(label):
code.put_label(label)
- for entry in py_entries:
- code.put_var_decref(entry)
+ self._generate_vars_cleanup(code, py_entries)
code.put_goto(old_label)
code.put_label(exit_scope)
code.putln('} /* exit inner scope */')
@@ -7920,6 +8261,14 @@
code.set_loop_labels(old_loop_labels)
code.error_label = old_error_label
+ def _generate_vars_cleanup(self, code, py_entries):
+ for entry in py_entries:
+ if entry.is_cglobal:
+ code.put_var_gotref(entry)
+ code.put_decref_set(entry.cname, "Py_None")
+ else:
+ code.put_var_xdecref_clear(entry)
+
class ComprehensionNode(ScopedExprNode):
# A list/set/dict comprehension
@@ -7927,6 +8276,7 @@
child_attrs = ["loop"]
is_temp = True
+ constant_result = not_a_constant
def infer_type(self, env):
return self.type
@@ -8350,15 +8700,16 @@
return ()
def infer_type(self, env):
- # TOOD: Infer struct constructors.
+ # TODO: Infer struct constructors.
return dict_type
def analyse_types(self, env):
- hold_errors()
- self.key_value_pairs = [ item.analyse_types(env)
- for item in self.key_value_pairs ]
- self.obj_conversion_errors = held_errors()
- release_errors(ignore=True)
+ with local_errors(ignore=True) as errors:
+ self.key_value_pairs = [
+ item.analyse_types(env)
+ for item in self.key_value_pairs
+ ]
+ self.obj_conversion_errors = errors
return self
def may_be_none(self):
@@ -8420,8 +8771,9 @@
if is_dict:
self.release_errors()
code.putln(
- "%s = PyDict_New(); %s" % (
+ "%s = __Pyx_PyDict_NewPresized(%d); %s" % (
self.result(),
+ len(self.key_value_pairs),
code.error_goto_if_null(self.result(), self.pos)))
code.put_gotref(self.py_result())
@@ -8782,9 +9134,6 @@
is_active = False
def analyse_expressions(self, env):
- if self.is_active:
- env.use_utility_code(
- UtilityCode.load_cached("CyFunctionClassCell", "CythonFunction.c"))
return self
def generate_evaluation_code(self, code):
@@ -8798,6 +9147,8 @@
def generate_injection_code(self, code, classobj_cname):
if self.is_active:
+ code.globalstate.use_utility_code(
+ UtilityCode.load_cached("CyFunctionClassCell", "CythonFunction.c"))
code.put_error_if_neg(self.pos, '__Pyx_CyFunction_InitClassCell(%s, %s)' % (
self.result(), classobj_cname))
@@ -8828,66 +9179,6 @@
code.put_incref(self.result(), py_object_type)
-class BoundMethodNode(ExprNode):
- # Helper class used in the implementation of Python
- # class definitions. Constructs an bound method
- # object from a class and a function.
- #
- # function ExprNode Function object
- # self_object ExprNode self object
-
- subexprs = ['function']
-
- def analyse_types(self, env):
- self.function = self.function.analyse_types(env)
- self.type = py_object_type
- self.is_temp = 1
- return self
-
- gil_message = "Constructing a bound method"
-
- def generate_result_code(self, code):
- code.putln(
- "%s = __Pyx_PyMethod_New(%s, %s, (PyObject*)%s->ob_type); %s" % (
- self.result(),
- self.function.py_result(),
- self.self_object.py_result(),
- self.self_object.py_result(),
- code.error_goto_if_null(self.result(), self.pos)))
- code.put_gotref(self.py_result())
-
-class UnboundMethodNode(ExprNode):
- # Helper class used in the implementation of Python
- # class definitions. Constructs an unbound method
- # object from a class and a function.
- #
- # function ExprNode Function object
-
- type = py_object_type
- is_temp = 1
-
- subexprs = ['function']
-
- def analyse_types(self, env):
- self.function = self.function.analyse_types(env)
- return self
-
- def may_be_none(self):
- return False
-
- gil_message = "Constructing an unbound method"
-
- def generate_result_code(self, code):
- class_cname = code.pyclass_stack[-1].classobj.result()
- code.putln(
- "%s = __Pyx_PyMethod_New(%s, 0, %s); %s" % (
- self.result(),
- self.function.py_result(),
- class_cname,
- code.error_goto_if_null(self.result(), self.pos)))
- code.put_gotref(self.py_result())
-
-
class PyCFunctionNode(ExprNode, ModuleNameMixin):
# Helper class used in the implementation of Python
# functions. Constructs a PyCFunction object
@@ -8966,22 +9257,19 @@
else:
default_args.append(arg)
if arg.annotation:
- arg.annotation = arg.annotation.analyse_types(env)
- if not arg.annotation.type.is_pyobject:
- arg.annotation = arg.annotation.coerce_to_pyobject(env)
+ arg.annotation = self.analyse_annotation(env, arg.annotation)
annotations.append((arg.pos, arg.name, arg.annotation))
for arg in (self.def_node.star_arg, self.def_node.starstar_arg):
if arg and arg.annotation:
- arg.annotation = arg.annotation.analyse_types(env)
- if not arg.annotation.type.is_pyobject:
- arg.annotation = arg.annotation.coerce_to_pyobject(env)
+ arg.annotation = self.analyse_annotation(env, arg.annotation)
annotations.append((arg.pos, arg.name, arg.annotation))
- if self.def_node.return_type_annotation:
- annotations.append((self.def_node.return_type_annotation.pos,
- StringEncoding.EncodedString("return"),
- self.def_node.return_type_annotation))
+ annotation = self.def_node.return_type_annotation
+ if annotation:
+ annotation = self.analyse_annotation(env, annotation)
+ self.def_node.return_type_annotation = annotation
+ annotations.append((annotation.pos, StringEncoding.EncodedString("return"), annotation))
if nonliteral_objects or nonliteral_other:
module_scope = env.global_scope()
@@ -8996,7 +9284,7 @@
for arg in nonliteral_other:
entry = scope.declare_var(arg.name, arg.type, None,
Naming.arg_prefix + arg.name,
- allow_pyobject=False)
+ allow_pyobject=False, allow_memoryview=True)
self.defaults.append((arg, entry))
entry = module_scope.declare_struct_or_union(
None, 'struct', scope, 1, None, cname=cname)
@@ -9058,6 +9346,20 @@
for pos, name, value in annotations])
self.annotations_dict = annotations_dict.analyse_types(env)
+ def analyse_annotation(self, env, annotation):
+ if annotation is None:
+ return None
+ atype = annotation.analyse_as_type(env)
+ if atype is not None:
+ # Keep parsed types as strings as they might not be Python representable.
+ annotation = UnicodeNode(
+ annotation.pos,
+ value=StringEncoding.EncodedString(atype.declaration_code('', for_display=True)))
+ annotation = annotation.analyse_types(env)
+ if not annotation.type.is_pyobject:
+ annotation = annotation.coerce_to_pyobject(env)
+ return annotation
+
def may_be_none(self):
return False
@@ -9164,7 +9466,8 @@
if self.defaults_kwdict:
code.putln('__Pyx_CyFunction_SetDefaultsKwDict(%s, %s);' % (
self.result(), self.defaults_kwdict.py_result()))
- if def_node.defaults_getter:
+ if def_node.defaults_getter and not self.specialized_cpdefs:
+ # Fused functions do not support dynamic defaults, only their specialisations can have them for now.
code.putln('__Pyx_CyFunction_SetDefaultsGetter(%s, %s);' % (
self.result(), def_node.defaults_getter.entry.pyfunc_cname))
if self.annotations_dict:
@@ -9219,7 +9522,9 @@
if self.result_code is None:
self.result_code = code.get_py_const(py_object_type, 'codeobj', cleanup_level=2)
- code = code.get_cached_constants_writer()
+ code = code.get_cached_constants_writer(self.result_code)
+ if code is None:
+ return # already initialised
code.mark_pos(self.pos)
func = self.def_node
func_name = code.get_py_string_const(
@@ -9228,7 +9533,9 @@
file_path = StringEncoding.bytes_literal(func.pos[0].get_filenametable_entry().encode('utf8'), 'utf8')
file_path_const = code.get_py_string_const(file_path, identifier=False, is_str=True)
- flags = []
+ # This combination makes CPython create a new dict for "frame.f_locals" (see GH #1836).
+ flags = ['CO_OPTIMIZED', 'CO_NEWLOCALS']
+
if self.def_node.star_arg:
flags.append('CO_VARARGS')
if self.def_node.starstar_arg:
@@ -9415,10 +9722,11 @@
label_num = 0
is_yield_from = False
is_await = False
+ in_async_gen = False
expr_keyword = 'yield'
def analyse_types(self, env):
- if not self.label_num:
+ if not self.label_num or (self.is_yield_from and self.in_async_gen):
error(self.pos, "'%s' not supported here" % self.expr_keyword)
self.is_temp = 1
if self.arg is not None:
@@ -9449,7 +9757,8 @@
Generate the code to return the argument in 'Naming.retval_cname'
and to continue at the yield label.
"""
- label_num, label_name = code.new_yield_label()
+ label_num, label_name = code.new_yield_label(
+ self.expr_keyword.replace(' ', '_'))
code.use_label(label_name)
saved = []
@@ -9469,10 +9778,23 @@
nogil=not code.funcstate.gil_owned)
code.put_finish_refcount_context()
- code.putln("/* return from generator, yielding value */")
+ if code.funcstate.current_except is not None:
+ # inside of an except block => save away currently handled exception
+ code.putln("__Pyx_Coroutine_SwapException(%s);" % Naming.generator_cname)
+ else:
+ # no exceptions being handled => restore exception state of caller
+ code.putln("__Pyx_Coroutine_ResetAndClearException(%s);" % Naming.generator_cname)
+
+ code.putln("/* return from %sgenerator, %sing value */" % (
+ 'async ' if self.in_async_gen else '',
+ 'await' if self.is_await else 'yield'))
code.putln("%s->resume_label = %d;" % (
Naming.generator_cname, label_num))
- code.putln("return %s;" % Naming.retval_cname)
+ if self.in_async_gen and not self.is_await:
+ # __Pyx__PyAsyncGenValueWrapperNew() steals a reference to the return value
+ code.putln("return __Pyx__PyAsyncGenValueWrapperNew(%s);" % Naming.retval_cname)
+ else:
+ code.putln("return %s;" % Naming.retval_cname)
code.put_label(label_name)
for cname, save_cname, type in saved:
@@ -9480,27 +9802,19 @@
if type.is_pyobject:
code.putln('%s->%s = 0;' % (Naming.cur_scope_cname, save_cname))
code.put_xgotref(cname)
- code.putln(code.error_goto_if_null(Naming.sent_value_cname, self.pos))
+ self.generate_sent_value_handling_code(code, Naming.sent_value_cname)
if self.result_is_used:
self.allocate_temp_result(code)
code.put('%s = %s; ' % (self.result(), Naming.sent_value_cname))
code.put_incref(self.result(), py_object_type)
+ def generate_sent_value_handling_code(self, code, value_cname):
+ code.putln(code.error_goto_if_null(value_cname, self.pos))
-class YieldFromExprNode(YieldExprNode):
- # "yield from GEN" expression
- is_yield_from = True
- expr_keyword = 'yield from'
-
- def coerce_yield_argument(self, env):
- if not self.arg.type.is_string:
- # FIXME: support C arrays and C++ iterators?
- error(self.pos, "yielding from non-Python object not supported")
- self.arg = self.arg.coerce_to_pyobject(env)
+class _YieldDelegationExprNode(YieldExprNode):
def yield_from_func(self, code):
- code.globalstate.use_utility_code(UtilityCode.load_cached("GeneratorYieldFrom", "Coroutine.c"))
- return "__Pyx_Generator_Yield_From"
+ raise NotImplementedError()
def generate_evaluation_code(self, code, source_cname=None, decref_source=False):
if source_cname is None:
@@ -9534,15 +9848,31 @@
code.put_gotref(self.result())
def handle_iteration_exception(self, code):
- code.putln("PyObject* exc_type = PyErr_Occurred();")
+ code.putln("PyObject* exc_type = __Pyx_PyErr_Occurred();")
code.putln("if (exc_type) {")
- code.putln("if (likely(exc_type == PyExc_StopIteration ||"
- " PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();")
+ code.putln("if (likely(exc_type == PyExc_StopIteration || (exc_type != PyExc_GeneratorExit &&"
+ " __Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration)))) PyErr_Clear();")
code.putln("else %s" % code.error_goto(self.pos))
code.putln("}")
-class AwaitExprNode(YieldFromExprNode):
+class YieldFromExprNode(_YieldDelegationExprNode):
+ # "yield from GEN" expression
+ is_yield_from = True
+ expr_keyword = 'yield from'
+
+ def coerce_yield_argument(self, env):
+ if not self.arg.type.is_string:
+ # FIXME: support C arrays and C++ iterators?
+ error(self.pos, "yielding from non-Python object not supported")
+ self.arg = self.arg.coerce_to_pyobject(env)
+
+ def yield_from_func(self, code):
+ code.globalstate.use_utility_code(UtilityCode.load_cached("GeneratorYieldFrom", "Coroutine.c"))
+ return "__Pyx_Generator_Yield_From"
+
+
+class AwaitExprNode(_YieldDelegationExprNode):
# 'await' expression node
#
# arg ExprNode the Awaitable value to await
@@ -9561,29 +9891,34 @@
return "__Pyx_Coroutine_Yield_From"
-class AIterAwaitExprNode(AwaitExprNode):
- # 'await' expression node used in async-for loops to support the pre-Py3.5.2 'aiter' protocol
- def yield_from_func(self, code):
- code.globalstate.use_utility_code(UtilityCode.load_cached("CoroutineAIterYieldFrom", "Coroutine.c"))
- return "__Pyx_Coroutine_AIter_Yield_From"
-
-
class AwaitIterNextExprNode(AwaitExprNode):
# 'await' expression node as part of 'async for' iteration
#
# Breaks out of loop on StopAsyncIteration exception.
- def fetch_iteration_result(self, code):
- assert code.break_label, "AwaitIterNextExprNode outside of 'async for' loop"
+ def _generate_break(self, code):
code.globalstate.use_utility_code(UtilityCode.load_cached("StopAsyncIteration", "Coroutine.c"))
- code.putln("PyObject* exc_type = PyErr_Occurred();")
- code.putln("if (exc_type && likely(exc_type == __Pyx_PyExc_StopAsyncIteration ||"
- " PyErr_GivenExceptionMatches(exc_type, __Pyx_PyExc_StopAsyncIteration))) {")
+ code.putln("PyObject* exc_type = __Pyx_PyErr_Occurred();")
+ code.putln("if (unlikely(exc_type && (exc_type == __Pyx_PyExc_StopAsyncIteration || ("
+ " exc_type != PyExc_StopIteration && exc_type != PyExc_GeneratorExit &&"
+ " __Pyx_PyErr_GivenExceptionMatches(exc_type, __Pyx_PyExc_StopAsyncIteration))))) {")
code.putln("PyErr_Clear();")
code.putln("break;")
code.putln("}")
+
+ def fetch_iteration_result(self, code):
+ assert code.break_label, "AwaitIterNextExprNode outside of 'async for' loop"
+ self._generate_break(code)
super(AwaitIterNextExprNode, self).fetch_iteration_result(code)
+ def generate_sent_value_handling_code(self, code, value_cname):
+ assert code.break_label, "AwaitIterNextExprNode outside of 'async for' loop"
+ code.putln("if (unlikely(!%s)) {" % value_cname)
+ self._generate_break(code)
+ # all non-break exceptions are errors, as in parent class
+ code.putln(code.error_goto(self.pos))
+ code.putln("}")
+
class GlobalsExprNode(AtomicExprNode):
type = dict_type
@@ -9779,6 +10114,7 @@
if self.is_cpp_operation() and self.exception_check == '+':
translate_cpp_exception(code, self.pos,
"%s = %s %s;" % (self.result(), self.operator, self.operand.result()),
+ self.result() if self.type.is_pyobject else None,
self.exception_value, self.in_nogil_context)
else:
code.putln("%s = %s %s;" % (self.result(), self.operator, self.operand.result()))
@@ -10018,6 +10354,7 @@
if (self.operand.type.is_cpp_class and self.exception_check == '+'):
translate_cpp_exception(code, self.pos,
"%s = %s %s;" % (self.result(), self.operator, self.operand.result()),
+ self.result() if self.type.is_pyobject else None,
self.exception_value, self.in_nogil_context)
@@ -10095,7 +10432,8 @@
error(self.pos, "Python objects cannot be cast from pointers of primitive types")
else:
# Should this be an error?
- warning(self.pos, "No conversion from %s to %s, python object pointer used." % (self.operand.type, self.type))
+ warning(self.pos, "No conversion from %s to %s, python object pointer used." % (
+ self.operand.type, self.type))
self.operand = self.operand.coerce_to_simple(env)
elif from_py and not to_py:
if self.type.create_from_py_utility_code(env):
@@ -10104,7 +10442,8 @@
if not (self.type.base_type.is_void or self.type.base_type.is_struct):
error(self.pos, "Python objects cannot be cast to pointers of primitive types")
else:
- warning(self.pos, "No conversion from %s to %s, python object pointer used." % (self.type, self.operand.type))
+ warning(self.pos, "No conversion from %s to %s, python object pointer used." % (
+ self.type, self.operand.type))
elif from_py and to_py:
if self.typecheck:
self.operand = PyTypeTestNode(self.operand, self.type, env, notnone=True)
@@ -10116,6 +10455,13 @@
elif self.operand.type.is_fused:
self.operand = self.operand.coerce_to(self.type, env)
#self.type = self.operand.type
+ if self.type.is_ptr and self.type.base_type.is_cfunction and self.type.base_type.nogil:
+ op_type = self.operand.type
+ if op_type.is_ptr:
+ op_type = op_type.base_type
+ if op_type.is_cfunction and not op_type.nogil:
+ warning(self.pos,
+ "Casting a GIL-requiring function into a nogil function circumvents GIL validation", 1)
return self
def is_simple(self):
@@ -10322,7 +10668,7 @@
def allocate_temp_result(self, code):
if self.temp_code:
- raise RuntimeError("temp allocated mulitple times")
+ raise RuntimeError("temp allocated multiple times")
self.temp_code = code.funcstate.allocate_temp(self.type, True)
@@ -10438,9 +10784,7 @@
for attr in path[1:]:
operand = AttributeNode(pos=self.pos, obj=operand, attribute=attr)
operand = AttributeNode(pos=self.pos, obj=operand, attribute=self.base_type.name)
- self.operand = operand
- self.__class__ = SizeofVarNode
- node = self.analyse_types(env)
+ node = SizeofVarNode(self.pos, operand=operand).analyse_types(env)
return node
if self.arg_type is None:
base_type = self.base_type.analyse(env)
@@ -10564,7 +10908,7 @@
arg_code = self.arg_type.result()
translate_cpp_exception(code, self.pos,
"%s = typeid(%s);" % (self.temp_code, arg_code),
- None, self.in_nogil_context)
+ None, None, self.in_nogil_context)
class TypeofNode(ExprNode):
# Compile-time type of an expression, as a string.
@@ -10585,6 +10929,10 @@
self.literal = literal.coerce_to_pyobject(env)
return self
+ def analyse_as_type(self, env):
+ self.operand = self.operand.analyse_types(env)
+ return self.operand.type
+
def may_be_none(self):
return False
@@ -10792,12 +11140,19 @@
if self.type.is_pythran_expr:
code.putln("// Pythran binop")
code.putln("__Pyx_call_destructor(%s);" % self.result())
- code.putln("new (&%s) decltype(%s){%s %s %s};" % (
- self.result(),
- self.result(),
- self.operand1.pythran_result(),
- self.operator,
- self.operand2.pythran_result()))
+ if self.operator == '**':
+ code.putln("new (&%s) decltype(%s){pythonic::numpy::functor::power{}(%s, %s)};" % (
+ self.result(),
+ self.result(),
+ self.operand1.pythran_result(),
+ self.operand2.pythran_result()))
+ else:
+ code.putln("new (&%s) decltype(%s){%s %s %s};" % (
+ self.result(),
+ self.result(),
+ self.operand1.pythran_result(),
+ self.operator,
+ self.operand2.pythran_result()))
elif self.operand1.type.is_pyobject:
function = self.py_operation_function(code)
if self.operator == '**':
@@ -10819,6 +11174,7 @@
if self.is_cpp_operation() and self.exception_check == '+':
translate_cpp_exception(code, self.pos,
"%s = %s;" % (self.result(), self.calculate_result_code()),
+ self.result() if self.type.is_pyobject else None,
self.exception_value, self.in_nogil_context)
else:
code.putln("%s = %s;" % (self.result(), self.calculate_result_code()))
@@ -10853,9 +11209,8 @@
cpp_type = None
if type1.is_cpp_class or type1.is_ptr:
cpp_type = type1.find_cpp_operation_type(self.operator, type2)
- # FIXME: handle the reversed case?
- #if cpp_type is None and (type2.is_cpp_class or type2.is_ptr):
- # cpp_type = type2.find_cpp_operation_type(self.operator, type1)
+ if cpp_type is None and (type2.is_cpp_class or type2.is_ptr):
+ cpp_type = type2.find_cpp_operation_type(self.operator, type1)
# FIXME: do we need to handle other cases here?
return cpp_type
@@ -10960,10 +11315,11 @@
self.operand2.result(),
self.overflow_bit_node.overflow_bit)
elif self.type.is_cpp_class or self.infix:
- return "(%s %s %s)" % (
- self.operand1.result(),
- self.operator,
- self.operand2.result())
+ if is_pythran_expr(self.type):
+ result1, result2 = self.operand1.pythran_result(), self.operand2.pythran_result()
+ else:
+ result1, result2 = self.operand1.result(), self.operand2.result()
+ return "(%s %s %s)" % (result1, self.operator, result2)
else:
func = self.type.binary_op(self.operator)
if func is None:
@@ -11029,7 +11385,7 @@
def infer_builtin_types_operation(self, type1, type2):
# b'abc' + 'abc' raises an exception in Py3,
# so we can safely infer the Py2 type for bytes here
- string_types = (bytes_type, str_type, basestring_type, unicode_type)
+ string_types = (bytes_type, bytearray_type, str_type, basestring_type, unicode_type)
if type1 in string_types and type2 in string_types:
return string_types[max(string_types.index(type1),
string_types.index(type2))]
@@ -11088,7 +11444,7 @@
def infer_builtin_types_operation(self, type1, type2):
# let's assume that whatever builtin type you multiply a string with
# will either return a string of the same type or fail with an exception
- string_types = (bytes_type, str_type, basestring_type, unicode_type)
+ string_types = (bytes_type, bytearray_type, str_type, basestring_type, unicode_type)
if type1 in string_types and type2.is_builtin_type:
return type1
if type2 in string_types and type1.is_builtin_type:
@@ -11176,7 +11532,7 @@
self.operand2 = self.operand2.coerce_to_simple(env)
def compute_c_result_type(self, type1, type2):
- if self.operator == '/' and self.ctruedivision:
+ if self.operator == '/' and self.ctruedivision and not type1.is_cpp_class and not type2.is_cpp_class:
if not type1.is_float and not type2.is_float:
widest_type = PyrexTypes.widest_numeric_type(type1, PyrexTypes.c_double_type)
widest_type = PyrexTypes.widest_numeric_type(type2, widest_type)
@@ -11192,9 +11548,11 @@
def generate_evaluation_code(self, code):
if not self.type.is_pyobject and not self.type.is_complex:
if self.cdivision is None:
- self.cdivision = (code.globalstate.directives['cdivision']
- or not self.type.signed
- or self.type.is_float)
+ self.cdivision = (
+ code.globalstate.directives['cdivision']
+ or self.type.is_float
+ or ((self.type.is_numeric or self.type.is_enum) and not self.type.signed)
+ )
if not self.cdivision:
code.globalstate.use_utility_code(
UtilityCode.load_cached("DivInt", "CMath.c").specialize(self.type))
@@ -11265,7 +11623,7 @@
code.putln("}")
def calculate_result_code(self):
- if self.type.is_complex:
+ if self.type.is_complex or self.is_cpp_operation():
return NumBinopNode.calculate_result_code(self)
elif self.type.is_float and self.operator == '//':
return "floor(%s / %s)" % (
@@ -11287,6 +11645,20 @@
self.operand2.result())
+_find_formatting_types = re.compile(
+ br"%"
+ br"(?:%|" # %%
+ br"(?:\([^)]+\))?" # %(name)
+ br"[-+#,0-9 ]*([a-z])" # %.2f etc.
+ br")").findall
+
+# These format conversion types can never trigger a Unicode string conversion in Py2.
+_safe_bytes_formats = set([
+ # Excludes 's' and 'r', which can generate non-bytes strings.
+ b'd', b'i', b'o', b'u', b'x', b'X', b'e', b'E', b'f', b'F', b'g', b'G', b'c', b'b', b'a',
+])
+
+
class ModNode(DivNode):
# '%' operator.
@@ -11296,7 +11668,7 @@
or NumBinopNode.is_py_operation_types(self, type1, type2))
def infer_builtin_types_operation(self, type1, type2):
- # b'%s' % xyz raises an exception in Py3, so it's safe to infer the type for Py2
+ # b'%s' % xyz raises an exception in Py3<3.5, so it's safe to infer the type for Py2 and later Py3's.
if type1 is unicode_type:
# None + xyz may be implemented by RHS
if type2.is_builtin_type or not self.operand1.may_be_none():
@@ -11306,6 +11678,11 @@
return type2
elif type2.is_numeric:
return type1
+ elif self.operand1.is_string_literal:
+ if type1 is str_type or type1 is bytes_type:
+ if set(_find_formatting_types(self.operand1.value)) <= _safe_bytes_formats:
+ return type1
+ return basestring_type
elif type1 is bytes_type and not type2.is_builtin_type:
return None # RHS might implement '% operator differently in Py3
else:
@@ -11357,13 +11734,19 @@
self.operand2.result())
def py_operation_function(self, code):
- if self.operand1.type is unicode_type:
- if self.operand1.may_be_none():
+ type1, type2 = self.operand1.type, self.operand2.type
+ # ("..." % x) must call "x.__rmod__()" for string subtypes.
+ if type1 is unicode_type:
+ if self.operand1.may_be_none() or (
+ type2.is_extension_type and type2.subtype_of(type1) or
+ type2 is py_object_type and not isinstance(self.operand2, CoerceToPyTypeNode)):
return '__Pyx_PyUnicode_FormatSafe'
else:
return 'PyUnicode_Format'
- elif self.operand1.type is str_type:
- if self.operand1.may_be_none():
+ elif type1 is str_type:
+ if self.operand1.may_be_none() or (
+ type2.is_extension_type and type2.subtype_of(type1) or
+ type2 is py_object_type and not isinstance(self.operand2, CoerceToPyTypeNode)):
return '__Pyx_PyString_FormatSafe'
else:
return '__Pyx_PyString_Format'
@@ -11504,7 +11887,7 @@
operator=self.operator,
operand1=operand1, operand2=operand2)
- def generate_bool_evaluation_code(self, code, final_result_temp, and_label, or_label, end_label, fall_through):
+ def generate_bool_evaluation_code(self, code, final_result_temp, final_result_type, and_label, or_label, end_label, fall_through):
code.mark_pos(self.pos)
outer_labels = (and_label, or_label)
@@ -11513,19 +11896,20 @@
else:
my_label = or_label = code.new_label('next_or')
self.operand1.generate_bool_evaluation_code(
- code, final_result_temp, and_label, or_label, end_label, my_label)
+ code, final_result_temp, final_result_type, and_label, or_label, end_label, my_label)
and_label, or_label = outer_labels
code.put_label(my_label)
self.operand2.generate_bool_evaluation_code(
- code, final_result_temp, and_label, or_label, end_label, fall_through)
+ code, final_result_temp, final_result_type, and_label, or_label, end_label, fall_through)
def generate_evaluation_code(self, code):
self.allocate_temp_result(code)
+ result_type = PyrexTypes.py_object_type if self.type.is_pyobject else self.type
or_label = and_label = None
end_label = code.new_label('bool_binop_done')
- self.generate_bool_evaluation_code(code, self.result(), and_label, or_label, end_label, end_label)
+ self.generate_bool_evaluation_code(code, self.result(), result_type, and_label, or_label, end_label, end_label)
code.put_label(end_label)
gil_message = "Truth-testing Python object"
@@ -11610,7 +11994,7 @@
test_result = self.arg.result()
return (test_result, self.arg.type.is_pyobject)
- def generate_bool_evaluation_code(self, code, final_result_temp, and_label, or_label, end_label, fall_through):
+ def generate_bool_evaluation_code(self, code, final_result_temp, final_result_type, and_label, or_label, end_label, fall_through):
code.mark_pos(self.pos)
# x => x
@@ -11653,7 +12037,7 @@
code.putln("} else {")
self.value.generate_evaluation_code(code)
self.value.make_owned_reference(code)
- code.putln("%s = %s;" % (final_result_temp, self.value.result()))
+ code.putln("%s = %s;" % (final_result_temp, self.value.result_as(final_result_type)))
self.value.generate_post_assignment_code(code)
# disposal: {not (and_label and or_label) [else]}
self.arg.generate_disposal_code(code)
@@ -11675,6 +12059,7 @@
true_val = None
false_val = None
+ is_temp = True
subexprs = ['test', 'true_val', 'false_val']
@@ -11699,7 +12084,6 @@
self.test = self.test.analyse_types(env).coerce_to_boolean(env)
self.true_val = self.true_val.analyse_types(env)
self.false_val = self.false_val.analyse_types(env)
- self.is_temp = 1
return self.analyse_result_type(env)
def analyse_result_type(self, env):
@@ -12014,6 +12398,11 @@
self.special_bool_cmp_utility_code = UtilityCode.load_cached("PyDictContains", "ObjectHandling.c")
self.special_bool_cmp_function = "__Pyx_PyDict_ContainsTF"
return True
+ elif self.operand2.type is Builtin.set_type:
+ self.operand2 = self.operand2.as_none_safe_node("'NoneType' object is not iterable")
+ self.special_bool_cmp_utility_code = UtilityCode.load_cached("PySetContains", "ObjectHandling.c")
+ self.special_bool_cmp_function = "__Pyx_PySet_ContainsTF"
+ return True
elif self.operand2.type is Builtin.unicode_type:
self.operand2 = self.operand2.as_none_safe_node("'NoneType' object is not iterable")
self.special_bool_cmp_utility_code = UtilityCode.load_cached("PyUnicodeContains", "StringTools.c")
@@ -12101,7 +12490,13 @@
self.c_operator(op),
code2)
if self.is_cpp_comparison() and self.exception_check == '+':
- translate_cpp_exception(code, self.pos, statement, self.exception_value, self.in_nogil_context)
+ translate_cpp_exception(
+ code,
+ self.pos,
+ statement,
+ result_code if self.type.is_pyobject else None,
+ self.exception_value,
+ self.in_nogil_context)
code.putln(statement)
def c_operator(self, op):
@@ -12133,7 +12528,14 @@
is_memslice_nonecheck = False
def infer_type(self, env):
- # TODO: Actually implement this (after merging with -unstable).
+ type1 = self.operand1.infer_type(env)
+ type2 = self.operand2.infer_type(env)
+
+ if is_pythran_expr(type1) or is_pythran_expr(type2):
+ if is_pythran_supported_type(type1) and is_pythran_supported_type(type2):
+ return PythranExpr(pythran_binop_type(self.operator, type1, type2))
+
+ # TODO: implement this for other types.
return py_object_type
def type_dependencies(self, env):
@@ -12303,18 +12705,19 @@
return self.operand1.check_const() and self.operand2.check_const()
def calculate_result_code(self):
- if self.operand1.type.is_complex:
+ operand1, operand2 = self.operand1, self.operand2
+ if operand1.type.is_complex:
if self.operator == "!=":
negation = "!"
else:
negation = ""
return "(%s%s(%s, %s))" % (
negation,
- self.operand1.type.binary_op('=='),
- self.operand1.result(),
- self.operand2.result())
+ operand1.type.binary_op('=='),
+ operand1.result(),
+ operand2.result())
elif self.is_c_string_contains():
- if self.operand2.type is unicode_type:
+ if operand2.type is unicode_type:
method = "__Pyx_UnicodeContainsUCS4"
else:
method = "__Pyx_BytesContains"
@@ -12325,16 +12728,18 @@
return "(%s%s(%s, %s))" % (
negation,
method,
- self.operand2.result(),
- self.operand1.result())
+ operand2.result(),
+ operand1.result())
else:
- result1 = self.operand1.result()
- result2 = self.operand2.result()
- if self.is_memslice_nonecheck:
- if self.operand1.type.is_memoryviewslice:
- result1 = "((PyObject *) %s.memview)" % result1
- else:
- result2 = "((PyObject *) %s.memview)" % result2
+ if is_pythran_expr(self.type):
+ result1, result2 = operand1.pythran_result(), operand2.pythran_result()
+ else:
+ result1, result2 = operand1.result(), operand2.result()
+ if self.is_memslice_nonecheck:
+ if operand1.type.is_memoryviewslice:
+ result1 = "((PyObject *) %s.memview)" % result1
+ else:
+ result2 = "((PyObject *) %s.memview)" % result2
return "(%s %s %s)" % (
result1,
@@ -12556,12 +12961,12 @@
def generate_result_code(self, code):
self.type.create_from_py_utility_code(self.env)
- code.putln("%s = %s(%s);" % (self.result(),
- self.type.from_py_function,
- self.arg.py_result()))
-
- error_cond = self.type.error_condition(self.result())
- code.putln(code.error_goto_if(error_cond, self.pos))
+ code.putln(self.type.from_py_call_code(
+ self.arg.py_result(),
+ self.result(),
+ self.pos,
+ code
+ ))
class CastNode(CoercionNode):
@@ -12620,6 +13025,15 @@
def nonlocally_immutable(self):
return self.arg.nonlocally_immutable()
+ def reanalyse(self):
+ if self.type != self.arg.type or not self.arg.is_temp:
+ return self
+ if not self.type.typeobj_is_available():
+ return self
+ if self.arg.may_be_none() and self.notnone:
+ return self.arg.as_none_safe_node("Cannot convert NoneType to %.200s" % self.type.name)
+ return self.arg
+
def calculate_constant_result(self):
# FIXME
pass
@@ -12659,7 +13073,7 @@
is_nonecheck = True
def __init__(self, arg, exception_type_cname, exception_message,
- exception_format_args):
+ exception_format_args=()):
CoercionNode.__init__(self, arg)
self.type = arg.type
self.result_ctype = arg.ctype()
@@ -12695,6 +13109,19 @@
else:
raise Exception("unsupported type")
+ @classmethod
+ def generate(cls, arg, code, exception_message,
+ exception_type_cname="PyExc_TypeError", exception_format_args=(), in_nogil_context=False):
+ node = cls(arg, exception_type_cname, exception_message, exception_format_args)
+ node.in_nogil_context = in_nogil_context
+ node.put_nonecheck(code)
+
+ @classmethod
+ def generate_if_needed(cls, arg, code, exception_message,
+ exception_type_cname="PyExc_TypeError", exception_format_args=(), in_nogil_context=False):
+ if arg.may_be_none():
+ cls.generate(arg, code, exception_message, exception_type_cname, exception_format_args, in_nogil_context)
+
def put_nonecheck(self, code):
code.putln(
"if (unlikely(%s == Py_None)) {" % self.condition())
@@ -12869,8 +13296,15 @@
return (self.type.is_ptr and not self.type.is_array) and self.arg.is_ephemeral()
def generate_result_code(self, code):
+ from_py_function = None
+ # for certain source types, we can do better than the generic coercion
+ if self.type.is_string and self.arg.type is bytes_type:
+ if self.type.from_py_function.startswith('__Pyx_PyObject_As'):
+ from_py_function = '__Pyx_PyBytes' + self.type.from_py_function[len('__Pyx_PyObject'):]
+ NoneCheckNode.generate_if_needed(self.arg, code, "expected bytes, NoneType found")
+
code.putln(self.type.from_py_call_code(
- self.arg.py_result(), self.result(), self.pos, code))
+ self.arg.py_result(), self.result(), self.pos, code, from_py_function=from_py_function))
if self.type.is_pyobject:
code.put_gotref(self.py_result())
@@ -12890,6 +13324,7 @@
Builtin.set_type: 'PySet_GET_SIZE',
Builtin.frozenset_type: 'PySet_GET_SIZE',
Builtin.bytes_type: 'PyBytes_GET_SIZE',
+ Builtin.bytearray_type: 'PyByteArray_GET_SIZE',
Builtin.unicode_type: '__Pyx_PyUnicode_IS_TRUE',
}
@@ -12918,11 +13353,9 @@
return
test_func = self._special_builtins.get(self.arg.type)
if test_func is not None:
- code.putln("%s = (%s != Py_None) && (%s(%s) != 0);" % (
- self.result(),
- self.arg.py_result(),
- test_func,
- self.arg.py_result()))
+ checks = ["(%s != Py_None)" % self.arg.py_result()] if self.arg.may_be_none() else []
+ checks.append("(%s(%s) != 0)" % (test_func, self.arg.py_result()))
+ code.putln("%s = %s;" % (self.result(), '&&'.join(checks)))
else:
code.putln(
"%s = __Pyx_PyObject_IsTrue(%s); %s" % (
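The last hunk above makes the generated truth test skip the `Py_None` guard when flow analysis proves the operand cannot be `None`. A minimal Python sketch of that code-generation step, using a hypothetical helper name (`truth_test_code`) rather than the actual node method:

```python
def truth_test_code(result, value, test_func, may_be_none):
    # Mirror of the change above: emit the Py_None guard only when the
    # operand may actually be None; otherwise a single size/truth check
    # on the known builtin type suffices.
    checks = ["(%s != Py_None)" % value] if may_be_none else []
    checks.append("(%s(%s) != 0)" % (test_func, value))
    return "%s = %s;" % (result, ' && '.join(checks))

print(truth_test_code("__pyx_t_1", "__pyx_v_s", "PySet_GET_SIZE", True))
# __pyx_t_1 = (__pyx_v_s != Py_None) && (PySet_GET_SIZE(__pyx_v_s) != 0);
print(truth_test_code("__pyx_t_1", "__pyx_v_s", "PySet_GET_SIZE", False))
# __pyx_t_1 = (PySet_GET_SIZE(__pyx_v_s) != 0);
```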
diff -Nru cython-0.26.1/Cython/Compiler/FlowControl.pxd cython-0.29.14/Cython/Compiler/FlowControl.pxd
--- cython-0.26.1/Cython/Compiler/FlowControl.pxd 2015-09-10 16:25:36.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/FlowControl.pxd 2019-04-14 10:00:58.000000000 +0000
@@ -11,10 +11,8 @@
cdef public list stats
cdef public dict gen
cdef public set bounded
- cdef public dict input
- cdef public dict output
- # Big integer it bitsets
+ # Big integer bitsets
cdef public object i_input
cdef public object i_output
cdef public object i_gen
diff -Nru cython-0.26.1/Cython/Compiler/FlowControl.py cython-0.29.14/Cython/Compiler/FlowControl.py
--- cython-0.26.1/Cython/Compiler/FlowControl.py 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/FlowControl.py 2018-09-22 14:18:56.000000000 +0000
@@ -341,14 +341,6 @@
return self.entry.type
return self.inferred_type
- def __getstate__(self):
- return (self.lhs, self.rhs, self.entry, self.pos,
- self.refs, self.is_arg, self.is_deletion, self.inferred_type)
-
- def __setstate__(self, state):
- (self.lhs, self.rhs, self.entry, self.pos,
- self.refs, self.is_arg, self.is_deletion, self.inferred_type) = state
-
class StaticAssignment(NameAssignment):
"""Initialised at declaration time, e.g. stack allocation."""
diff -Nru cython-0.26.1/Cython/Compiler/FusedNode.py cython-0.29.14/Cython/Compiler/FusedNode.py
--- cython-0.26.1/Cython/Compiler/FusedNode.py 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/FusedNode.py 2019-11-01 14:13:39.000000000 +0000
@@ -127,9 +127,6 @@
# len(permutations))
# import pprint; pprint.pprint([d for cname, d in permutations])
- if self.node.entry in env.cfunc_entries:
- env.cfunc_entries.remove(self.node.entry)
-
# Prevent copying of the python function
self.orig_py_func = orig_py_func = self.node.py_func
self.node.py_func = None
@@ -139,12 +136,26 @@
fused_types = self.node.type.get_fused_types()
self.fused_compound_types = fused_types
+ new_cfunc_entries = []
for cname, fused_to_specific in permutations:
copied_node = copy.deepcopy(self.node)
- # Make the types in our CFuncType specific
+ # Make the types in our CFuncType specific.
type = copied_node.type.specialize(fused_to_specific)
entry = copied_node.entry
+ type.specialize_entry(entry, cname)
+
+ # Reuse existing Entries (e.g. from .pxd files).
+ for i, orig_entry in enumerate(env.cfunc_entries):
+ if entry.cname == orig_entry.cname and type.same_as_resolved_type(orig_entry.type):
+ copied_node.entry = env.cfunc_entries[i]
+ if not copied_node.entry.func_cname:
+ copied_node.entry.func_cname = entry.func_cname
+ entry = copied_node.entry
+ type = entry.type
+ break
+ else:
+ new_cfunc_entries.append(entry)
copied_node.type = type
entry.type, type.entry = type, entry
@@ -165,9 +176,6 @@
self._specialize_function_args(copied_node.cfunc_declarator.args,
fused_to_specific)
- type.specialize_entry(entry, cname)
- env.cfunc_entries.append(entry)
-
# If a cpdef, declare all specialized cpdefs (this
# also calls analyse_declarations)
copied_node.declare_cpdef_wrapper(env)
@@ -181,6 +189,14 @@
if not self.replace_fused_typechecks(copied_node):
break
+ # replace old entry with new entries
+ try:
+ cindex = env.cfunc_entries.index(self.node.entry)
+ except ValueError:
+ env.cfunc_entries.extend(new_cfunc_entries)
+ else:
+ env.cfunc_entries[cindex:cindex+1] = new_cfunc_entries
+
if orig_py_func:
self.py_func = self.make_fused_cpdef(orig_py_func, env,
is_def=False)
@@ -209,7 +225,7 @@
"""
Create a new local scope for the copied node and append it to
self.nodes. A new local scope is needed because the arguments with the
- fused types are aready in the local scope, and we need the specialized
+ fused types are already in the local scope, and we need the specialized
entries created after analyse_declarations on each specialized version
of the (CFunc)DefNode.
f2s is a dict mapping each fused type to its specialized version
@@ -260,7 +276,7 @@
def _fused_instance_checks(self, normal_types, pyx_code, env):
"""
- Genereate Cython code for instance checks, matching an object to
+ Generate Cython code for instance checks, matching an object to
specialized types.
"""
for specialized_type in normal_types:
@@ -374,7 +390,7 @@
coerce_from_py_func=memslice_type.from_py_function,
dtype=dtype)
decl_code.putln(
- "{{memviewslice_cname}} {{coerce_from_py_func}}(object)")
+ "{{memviewslice_cname}} {{coerce_from_py_func}}(object, int)")
pyx_code.context.update(
specialized_type_name=specialized_type.specialization_string,
@@ -384,7 +400,7 @@
u"""
# try {{dtype}}
if itemsize == -1 or itemsize == {{sizeof_dtype}}:
- memslice = {{coerce_from_py_func}}(arg)
+ memslice = {{coerce_from_py_func}}(arg, 0)
if memslice.memview:
__PYX_XDEC_MEMVIEW(&memslice, 1)
# print 'found a match for the buffer through format parsing'
@@ -405,10 +421,11 @@
# The first thing to find a match in this loop breaks out of the loop
pyx_code.put_chunk(
u"""
+ """ + (u"arg_is_pythran_compatible = False" if pythran_types else u"") + u"""
if ndarray is not None:
if isinstance(arg, ndarray):
dtype = arg.dtype
- arg_is_pythran_compatible = True
+ """ + (u"arg_is_pythran_compatible = True" if pythran_types else u"") + u"""
elif __pyx_memoryview_check(arg):
arg_base = arg.base
if isinstance(arg_base, ndarray):
@@ -422,24 +439,30 @@
if dtype is not None:
itemsize = dtype.itemsize
kind = ord(dtype.kind)
- # We only support the endianess of the current compiler
+ dtype_signed = kind == 'i'
+ """)
+ pyx_code.indent(2)
+ if pythran_types:
+ pyx_code.put_chunk(
+ u"""
+ # Pythran only supports the endianness of the current compiler
byteorder = dtype.byteorder
if byteorder == "<" and not __Pyx_Is_Little_Endian():
arg_is_pythran_compatible = False
- if byteorder == ">" and __Pyx_Is_Little_Endian():
+ elif byteorder == ">" and __Pyx_Is_Little_Endian():
arg_is_pythran_compatible = False
- dtype_signed = kind == 'i'
if arg_is_pythran_compatible:
cur_stride = itemsize
- for dim,stride in zip(reversed(arg.shape),reversed(arg.strides)):
- if stride != cur_stride:
+ shape = arg.shape
+ strides = arg.strides
+ for i in range(arg.ndim-1, -1, -1):
+ if (strides[i]) != cur_stride:
arg_is_pythran_compatible = False
break
- cur_stride *= dim
+ cur_stride *= shape[i]
else:
- arg_is_pythran_compatible = not (arg.flags.f_contiguous and arg.ndim > 1)
- """)
- pyx_code.indent(2)
+ arg_is_pythran_compatible = not (arg.flags.f_contiguous and (arg.ndim) > 1)
+ """)
pyx_code.named_insertion_point("numpy_dtype_checks")
self._buffer_check_numpy_dtype(pyx_code, buffer_types, pythran_types)
pyx_code.dedent(2)
@@ -448,7 +471,7 @@
self._buffer_parse_format_string_check(
pyx_code, decl_code, specialized_type, env)
- def _buffer_declarations(self, pyx_code, decl_code, all_buffer_types):
+ def _buffer_declarations(self, pyx_code, decl_code, all_buffer_types, pythran_types):
"""
If we have any buffer specializations, write out some variable
declarations and imports.
@@ -468,10 +491,14 @@
cdef Py_ssize_t itemsize
cdef bint dtype_signed
cdef char kind
- cdef bint arg_is_pythran_compatible
itemsize = -1
- arg_is_pythran_compatible = False
+ """)
+
+ if pythran_types:
+ pyx_code.local_variable_declarations.put_chunk(u"""
+ cdef bint arg_is_pythran_compatible
+ cdef Py_ssize_t cur_stride
""")
pyx_code.imports.put_chunk(
@@ -480,25 +507,27 @@
ndarray = __Pyx_ImportNumPyArrayTypeIfAvailable()
""")
+ seen_typedefs = set()
seen_int_dtypes = set()
for buffer_type in all_buffer_types:
dtype = buffer_type.dtype
+ dtype_name = self._dtype_name(dtype)
if dtype.is_typedef:
- #decl_code.putln("ctypedef %s %s" % (dtype.resolve(),
- # self._dtype_name(dtype)))
- decl_code.putln('ctypedef %s %s "%s"' % (dtype.resolve(),
- self._dtype_name(dtype),
- dtype.empty_declaration_code()))
+ if dtype_name not in seen_typedefs:
+ seen_typedefs.add(dtype_name)
+ decl_code.putln(
+ 'ctypedef %s %s "%s"' % (dtype.resolve(), dtype_name,
+ dtype.empty_declaration_code()))
if buffer_type.dtype.is_int:
if str(dtype) not in seen_int_dtypes:
seen_int_dtypes.add(str(dtype))
- pyx_code.context.update(dtype_name=self._dtype_name(dtype),
+ pyx_code.context.update(dtype_name=dtype_name,
dtype_type=self._dtype_type(dtype))
pyx_code.local_variable_declarations.put_chunk(
u"""
cdef bint {{dtype_name}}_is_signed
- {{dtype_name}}_is_signed = <{{dtype_type}}> -1 < 0
+ {{dtype_name}}_is_signed = not (<{{dtype_type}}> -1 > 0)
""")
def _split_fused_types(self, arg):
@@ -654,7 +683,7 @@
default_idx += 1
if all_buffer_types:
- self._buffer_declarations(pyx_code, decl_code, all_buffer_types)
+ self._buffer_declarations(pyx_code, decl_code, all_buffer_types, pythran_types)
env.use_utility_code(Code.UtilityCode.load_cached("Import", "ImportExport.c"))
env.use_utility_code(Code.UtilityCode.load_cached("ImportNumPyArray", "ImportExport.c"))
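The rewritten Pythran-compatibility loop above checks C-contiguity by walking the dimensions from innermost to outermost and comparing each stride against the running element size. The same check as a self-contained sketch (strides and itemsize in bytes):

```python
def is_c_contiguous(shape, strides, itemsize):
    # Same walk as the generated loop above: from the innermost dimension
    # outward, each stride must equal itemsize times the product of all
    # inner dimension extents, or the buffer is not C-contiguous.
    cur_stride = itemsize
    for i in range(len(shape) - 1, -1, -1):
        if strides[i] != cur_stride:
            return False
        cur_stride *= shape[i]
    return True

print(is_c_contiguous((2, 3), (24, 8), 8))   # True: packed float64 rows
print(is_c_contiguous((2, 3), (8, 16), 8))   # False: Fortran-order strides
```

Replacing the original `zip(reversed(arg.shape), reversed(arg.strides))` with an explicit index loop avoids building Python iterators in the generated dispatch code.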
diff -Nru cython-0.26.1/Cython/Compiler/Future.py cython-0.29.14/Cython/Compiler/Future.py
--- cython-0.26.1/Cython/Compiler/Future.py 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Future.py 2018-11-24 09:20:06.000000000 +0000
@@ -4,7 +4,7 @@
return getattr(__future__, name, object())
unicode_literals = _get_feature("unicode_literals")
-with_statement = _get_feature("with_statement")
+with_statement = _get_feature("with_statement") # dummy
division = _get_feature("division")
print_function = _get_feature("print_function")
absolute_import = _get_feature("absolute_import")
diff -Nru cython-0.26.1/Cython/Compiler/Lexicon.py cython-0.29.14/Cython/Compiler/Lexicon.py
--- cython-0.26.1/Cython/Compiler/Lexicon.py 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Lexicon.py 2018-11-24 09:20:06.000000000 +0000
@@ -3,7 +3,7 @@
# Cython Scanner - Lexical Definitions
#
-from __future__ import absolute_import
+from __future__ import absolute_import, unicode_literals
raw_prefixes = "rR"
bytes_prefixes = "bB"
diff -Nru cython-0.26.1/Cython/Compiler/Main.py cython-0.29.14/Cython/Compiler/Main.py
--- cython-0.26.1/Cython/Compiler/Main.py 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Main.py 2019-06-30 06:50:51.000000000 +0000
@@ -9,8 +9,8 @@
import sys
import io
-if sys.version_info[:2] < (2, 6) or (3, 0) <= sys.version_info[:2] < (3, 2):
- sys.stderr.write("Sorry, Cython requires Python 2.6+ or 3.2+, found %d.%d\n" % tuple(sys.version_info[:2]))
+if sys.version_info[:2] < (2, 6) or (3, 0) <= sys.version_info[:2] < (3, 3):
+ sys.stderr.write("Sorry, Cython requires Python 2.6+ or 3.3+, found %d.%d\n" % tuple(sys.version_info[:2]))
sys.exit(1)
try:
@@ -18,12 +18,12 @@
except ImportError:
basestring = str
-from . import Errors
# Do not import Parsing here, import it when needed, because Parsing imports
# Nodes, which globally needs debug command line options initialized to set a
# conditional metaclass. These options are processed by CmdLine called from
# main() in this file.
# import Parsing
+from . import Errors
from .StringEncoding import EncodedString
from .Scanning import PyrexScanner, FileSourceDescriptor
from .Errors import PyrexError, CompileError, error, warning
@@ -38,6 +38,9 @@
verbose = 0
+standard_include_path = os.path.abspath(os.path.join(os.path.dirname(__file__),
+ os.path.pardir, 'Includes'))
+
class CompilationData(object):
# Bundles the information that is passed from transform to transform.
# (For now, this is only)
@@ -52,6 +55,7 @@
# result CompilationResult
pass
+
class Context(object):
# This class encapsulates the context needed for compiling
# one or more Cython implementation files along with their
@@ -65,9 +69,10 @@
# language_level int currently 2 or 3 for Python 2/3
cython_scope = None
+ language_level = None # warn when not set but default to Py2
def __init__(self, include_directories, compiler_directives, cpp=False,
- language_level=2, options=None, create_testscope=True):
+ language_level=None, options=None):
# cython_scope is a hack, set to False by subclasses, in order to break
# an infinite loop.
# Better code organization would fix it.
@@ -85,19 +90,25 @@
self.pxds = {} # full name -> node tree
self._interned = {} # (type(value), value, *key_args) -> interned_value
- standard_include_path = os.path.abspath(os.path.normpath(
- os.path.join(os.path.dirname(__file__), os.path.pardir, 'Includes')))
- self.include_directories = include_directories + [standard_include_path]
-
- self.set_language_level(language_level)
+ if language_level is not None:
+ self.set_language_level(language_level)
self.gdb_debug_outputwriter = None
def set_language_level(self, level):
+ from .Future import print_function, unicode_literals, absolute_import, division
+ future_directives = set()
+ if level == '3str':
+ level = 3
+ else:
+ level = int(level)
+ if level >= 3:
+ future_directives.add(unicode_literals)
+ if level >= 3:
+ future_directives.update([print_function, absolute_import, division])
self.language_level = level
+ self.future_directives = future_directives
if level >= 3:
- from .Future import print_function, unicode_literals, absolute_import, division
- self.future_directives.update([print_function, unicode_literals, absolute_import, division])
self.modules['builtins'] = self.modules['__builtin__']
def intern_ustring(self, value, encoding=None):
@@ -239,7 +250,7 @@
pxd = self.search_include_directories(qualified_name, ".pxd", pos, sys_path=sys_path)
if pxd is None: # XXX Keep this until Includes/Deprecated is removed
if (qualified_name.startswith('python') or
- qualified_name in ('stdlib', 'stdio', 'stl')):
+ qualified_name in ('stdlib', 'stdio', 'stl')):
standard_include_path = os.path.abspath(os.path.normpath(
os.path.join(os.path.dirname(__file__), os.path.pardir, 'Includes')))
deprecated_include_path = os.path.join(standard_include_path, 'Deprecated')
@@ -276,8 +287,13 @@
def search_include_directories(self, qualified_name, suffix, pos,
include=False, sys_path=False):
- return Utils.search_include_directories(
- tuple(self.include_directories), qualified_name, suffix, pos, include, sys_path)
+ include_dirs = self.include_directories
+ if sys_path:
+ include_dirs = include_dirs + sys.path
+ # include_dirs must be hashable for caching in @cached_function
+ include_dirs = tuple(include_dirs + [standard_include_path])
+ return search_include_directories(include_dirs, qualified_name,
+ suffix, pos, include)
def find_root_package_dir(self, file_path):
return Utils.find_root_package_dir(file_path)
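The `search_include_directories` wrapper above converts `include_dirs` to a tuple before delegating, because the module-level implementation is wrapped in `@Utils.cached_function` and cache decorators need hashable arguments. A sketch of the constraint, approximating `Utils.cached_function` with the stdlib `functools.lru_cache`:

```python
from functools import lru_cache

# Hypothetical stand-in for the cached search function: caching hashes
# the argument tuple, so a list of include directories would raise
# TypeError, while a tuple works.
@lru_cache(maxsize=None)
def search(dirs, name):
    return [d + "/" + name for d in dirs]

print(search(("a", "b"), "x.pxd"))  # ['a/x.pxd', 'b/x.pxd']
# search(["a", "b"], "x.pxd") would raise TypeError: unhashable type: 'list'
```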
@@ -356,7 +372,7 @@
from ..Parser import ConcreteSyntaxTree
except ImportError:
raise RuntimeError(
- "Formal grammer can only be used with compiled Cython with an available pgen.")
+ "Formal grammar can only be used with compiled Cython with an available pgen.")
ConcreteSyntaxTree.p_module(source_filename)
except UnicodeDecodeError as e:
#import traceback
@@ -426,6 +442,7 @@
pass
result.c_file = None
+
def get_output_filename(source_filename, cwd, options):
if options.cplus:
c_suffix = ".cpp"
@@ -441,6 +458,7 @@
else:
return suggested_file_name
+
def create_default_resultobj(compilation_source, options):
result = CompilationResult()
result.main_source_file = compilation_source.source_desc.filename
@@ -451,6 +469,7 @@
result.embedded_metadata = options.embedded_metadata
return result
+
def run_pipeline(source, options, full_module_name=None, context=None):
from . import Pipeline
@@ -464,6 +483,8 @@
abs_path = os.path.abspath(source)
full_module_name = full_module_name or context.extract_module_name(source, options)
+ Utils.raise_error_if_module_name_forbidden(full_module_name)
+
if options.relative_path_in_code_position_comments:
rel_path = full_module_name.replace('.', os.sep) + source_ext
if not abs_path.endswith(rel_path):
@@ -496,15 +517,15 @@
return result
-#------------------------------------------------------------------------
+# ------------------------------------------------------------------------
#
# Main Python entry points
#
-#------------------------------------------------------------------------
+# ------------------------------------------------------------------------
class CompilationSource(object):
"""
- Contains the data necesarry to start up a compilation pipeline for
+ Contains the data necessary to start up a compilation pipeline for
a single compilation unit.
"""
def __init__(self, source_desc, full_module_name, cwd):
@@ -512,30 +533,13 @@
self.full_module_name = full_module_name
self.cwd = cwd
-class CompilationOptions(object):
- """
- Options to the Cython compiler:
-
- show_version boolean Display version number
- use_listing_file boolean Generate a .lis file
- errors_to_stderr boolean Echo errors to stderr when using .lis
- include_path [string] Directories to search for include files
- output_file string Name of generated .c file
- generate_pxi boolean Generate .pxi file for public declarations
- capi_reexport_cincludes
- boolean Add cincluded headers to any auto-generated
- header files.
- timestamps boolean Only compile changed source files.
- verbose boolean Always print source names being compiled
- compiler_directives dict Overrides for pragma options (see Options.py)
- embedded_metadata dict Metadata to embed in the C file as json.
- evaluate_tree_assertions boolean Test support: evaluate parse tree assertions
- language_level integer The Python language level: 2 or 3
- formal_grammar boolean Parse the file with the formal grammar
- cplus boolean Compile as c++ code
+class CompilationOptions(object):
+ r"""
+ See default_options at the end of this module for a list of all possible
+ options and CmdLine.usage and CmdLine.parse_command_line() for their
+ meaning.
"""
-
def __init__(self, defaults=None, **kw):
self.include_path = []
if defaults:
@@ -558,9 +562,10 @@
', '.join(unknown_options))
raise ValueError(message)
+ directive_defaults = Options.get_directive_defaults()
directives = dict(options['compiler_directives']) # copy mutable field
# check for invalid directives
- unknown_directives = set(directives) - set(Options.get_directive_defaults())
+ unknown_directives = set(directives) - set(directive_defaults)
if unknown_directives:
message = "got unknown compiler directive%s: %s" % (
's' if len(unknown_directives) > 1 else '',
@@ -572,11 +577,13 @@
warnings.warn("C++ mode forced when in Pythran mode!")
options['cplus'] = True
if 'language_level' in directives and 'language_level' not in kw:
- options['language_level'] = int(directives['language_level'])
+ options['language_level'] = directives['language_level']
+ elif not options.get('language_level'):
+ options['language_level'] = directive_defaults.get('language_level')
if 'formal_grammar' in directives and 'formal_grammar' not in kw:
options['formal_grammar'] = directives['formal_grammar']
if options['cache'] is True:
- options['cache'] = os.path.expanduser("~/.cycache")
+ options['cache'] = os.path.join(Utils.get_cython_cache_dir(), 'compiler')
self.__dict__.update(options)
@@ -589,6 +596,83 @@
return Context(self.include_path, self.compiler_directives,
self.cplus, self.language_level, options=self)
+ def get_fingerprint(self):
+ r"""
+ Return a string that contains all the options that are relevant for cache invalidation.
+ """
+ # Collect only the data that can affect the generated file(s).
+ data = {}
+
+ for key, value in self.__dict__.items():
+ if key in ['show_version', 'errors_to_stderr', 'verbose', 'quiet']:
+ # verbosity flags have no influence on the compilation result
+ continue
+ elif key in ['output_file', 'output_dir']:
+ # ignore the exact name of the output file
+ continue
+ elif key in ['timestamps']:
+ # the cache cares about the content of files, not about the timestamps of sources
+ continue
+ elif key in ['cache']:
+ # hopefully caching has no influence on the compilation result
+ continue
+ elif key in ['compiler_directives']:
+ # directives passed on to the C compiler do not influence the generated C code
+ continue
+ elif key in ['include_path']:
+ # this path changes which headers are tracked as dependencies,
+ # it has no influence on the generated C code
+ continue
+ elif key in ['working_path']:
+ # this path changes where modules and pxd files are found;
+ # their content is part of the fingerprint anyway, their
+ # absolute path does not matter
+ continue
+ elif key in ['create_extension']:
+ # create_extension() has already mangled the options, e.g.,
+ # embedded_metadata, when the fingerprint is computed so we
+ # ignore it here.
+ continue
+ elif key in ['build_dir']:
+ # the (temporary) directory where we collect dependencies
+ # has no influence on the C output
+ continue
+ elif key in ['use_listing_file', 'generate_pxi', 'annotate', 'annotate_coverage_xml']:
+ # all output files are contained in the cache so the types of
+ # files generated must be part of the fingerprint
+ data[key] = value
+ elif key in ['formal_grammar', 'evaluate_tree_assertions']:
+ # these bits can change whether compilation to C passes/fails
+ data[key] = value
+ elif key in ['embedded_metadata', 'emit_linenums', 'c_line_in_traceback', 'gdb_debug', 'relative_path_in_code_position_comments']:
+ # the generated code contains additional bits when these are set
+ data[key] = value
+ elif key in ['cplus', 'language_level', 'compile_time_env', 'np_pythran']:
+ # assorted bits that, e.g., influence the parser
+ data[key] = value
+ elif key == ['capi_reexport_cincludes']:
+ if self.capi_reexport_cincludes:
+ # our caching implementation does not yet include fingerprints of all the header files
+ raise NotImplementedError('capi_reexport_cincludes is not compatible with Cython caching')
+ elif key == ['common_utility_include_dir']:
+ if self.common_utility_include_dir:
+ raise NotImplementedError('common_utility_include_dir is not compatible with Cython caching yet')
+ else:
+ # any unexpected option should go into the fingerprint; it's better
+ # to recompile than to return incorrect results from the cache.
+ data[key] = value
+
+ def to_fingerprint(item):
+ r"""
+ Recursively turn item into a string, turning dicts into lists with
+ deterministic ordering.
+ """
+ if isinstance(item, dict):
+ item = sorted([(repr(key), to_fingerprint(value)) for key, value in item.items()])
+ return repr(item)
+
+ return to_fingerprint(data)
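The `to_fingerprint` helper above canonicalises the options dict so that two equal option sets always produce the same cache key regardless of dict iteration order. A standalone sketch of the same idea (trimmed from the hunk above, not the exact Cython code path):

```python
def to_fingerprint(item):
    """Recursively turn item into a string, turning dicts into sorted
    (repr(key), fingerprint(value)) lists so the result is deterministic."""
    if isinstance(item, dict):
        item = sorted((repr(key), to_fingerprint(value)) for key, value in item.items())
    return repr(item)

# Two option dicts built in different insertion order fingerprint identically.
a = {"cplus": True, "language_level": 3, "compile_time_env": {"X": 1}}
b = {"compile_time_env": {"X": 1}, "language_level": 3, "cplus": True}
print(to_fingerprint(a) == to_fingerprint(b))  # True
```

Sorting on `repr(key)` rather than the key itself keeps the comparison well-defined even for mixed key types.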
+
class CompilationResult(object):
"""
@@ -678,13 +762,14 @@
processed.add(source)
return results
+
def compile(source, options = None, full_module_name = None, **kwds):
"""
compile(source [, options], [, <option> = <value>]...)
Compile one or more Pyrex implementation files, with optional timestamp
- checking and recursing on dependecies. The source argument may be a string
- or a sequence of strings If it is a string and no recursion or timestamp
+ checking and recursing on dependencies. The source argument may be a string
+ or a sequence of strings. If it is a string and no recursion or timestamp
checking is requested, a CompilationResult is returned, otherwise a
CompilationResultSet is returned.
"""
@@ -694,14 +779,67 @@
else:
return compile_multiple(source, options)
-#------------------------------------------------------------------------
+
+@Utils.cached_function
+def search_include_directories(dirs, qualified_name, suffix, pos, include=False):
+ """
+ Search the list of include directories for the given file name.
+
+ If a source file position is given, first searches the directory
+ containing that file. Returns None if not found, but does not
+ report an error.
+
+ The 'include' option will disable package dereferencing.
+ """
+
+ if pos:
+ file_desc = pos[0]
+ if not isinstance(file_desc, FileSourceDescriptor):
+ raise RuntimeError("Only file sources for code supported")
+ if include:
+ dirs = (os.path.dirname(file_desc.filename),) + dirs
+ else:
+ dirs = (Utils.find_root_package_dir(file_desc.filename),) + dirs
+
+ dotted_filename = qualified_name
+ if suffix:
+ dotted_filename += suffix
+
+ if not include:
+ names = qualified_name.split('.')
+ package_names = tuple(names[:-1])
+ module_name = names[-1]
+ module_filename = module_name + suffix
+ package_filename = "__init__" + suffix
+
+ for dirname in dirs:
+ path = os.path.join(dirname, dotted_filename)
+ if os.path.exists(path):
+ return path
+
+ if not include:
+ package_dir = Utils.check_package_dir(dirname, package_names)
+ if package_dir is not None:
+ path = os.path.join(package_dir, module_filename)
+ if os.path.exists(path):
+ return path
+ path = os.path.join(package_dir, module_name,
+ package_filename)
+ if os.path.exists(path):
+ return path
+ return None
+
+
+# ------------------------------------------------------------------------
#
# Main command-line entry point
#
-#------------------------------------------------------------------------
+# ------------------------------------------------------------------------
+
def setuptools_main():
return main(command_line = 1)
+
def main(command_line = 0):
args = sys.argv[1:]
any_failures = 0
@@ -727,12 +865,11 @@
sys.exit(1)
-
-#------------------------------------------------------------------------
+# ------------------------------------------------------------------------
#
# Set the default options depending on the platform
#
-#------------------------------------------------------------------------
+# ------------------------------------------------------------------------
default_options = dict(
show_version = 0,
@@ -754,7 +891,7 @@
emit_linenums = False,
relative_path_in_code_position_comments = True,
c_line_in_traceback = True,
- language_level = 2,
+ language_level = None, # warn but default to 2
formal_grammar = False,
gdb_debug = False,
compile_time_env = None,
diff -Nru cython-0.26.1/Cython/Compiler/MemoryView.py cython-0.29.14/Cython/Compiler/MemoryView.py
--- cython-0.26.1/Cython/Compiler/MemoryView.py 2016-12-10 15:41:07.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/MemoryView.py 2018-09-22 14:18:56.000000000 +0000
@@ -28,12 +28,12 @@
format_flag = "PyBUF_FORMAT"
-memview_c_contiguous = "(PyBUF_C_CONTIGUOUS | PyBUF_FORMAT | PyBUF_WRITABLE)"
-memview_f_contiguous = "(PyBUF_F_CONTIGUOUS | PyBUF_FORMAT | PyBUF_WRITABLE)"
-memview_any_contiguous = "(PyBUF_ANY_CONTIGUOUS | PyBUF_FORMAT | PyBUF_WRITABLE)"
-memview_full_access = "PyBUF_FULL"
-#memview_strided_access = "PyBUF_STRIDED"
-memview_strided_access = "PyBUF_RECORDS"
+memview_c_contiguous = "(PyBUF_C_CONTIGUOUS | PyBUF_FORMAT)"
+memview_f_contiguous = "(PyBUF_F_CONTIGUOUS | PyBUF_FORMAT)"
+memview_any_contiguous = "(PyBUF_ANY_CONTIGUOUS | PyBUF_FORMAT)"
+memview_full_access = "PyBUF_FULL_RO"
+#memview_strided_access = "PyBUF_STRIDED_RO"
+memview_strided_access = "PyBUF_RECORDS_RO"
MEMVIEW_DIRECT = '__Pyx_MEMVIEW_DIRECT'
MEMVIEW_PTR = '__Pyx_MEMVIEW_PTR'
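The hunk above drops `PyBUF_WRITABLE` from the contiguity flags and switches to the `_RO` request variants, so memoryviews can accept read-only buffer exporters such as `bytes`. The distinction the buffer protocol draws is visible from plain Python:

```python
import array

ro = memoryview(b"abc")                       # bytes exposes a read-only buffer
rw = memoryview(array.array("b", [1, 2, 3]))  # array exposes a writable buffer

print(ro.readonly, rw.readonly)  # True False

try:
    ro[0] = 65  # writing through a read-only view is rejected
except TypeError:
    print("read-only view rejects writes")
```

Requesting a writable buffer (the old flags) from `bytes` would fail at acquisition time; requesting read-only access succeeds for both exporters.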
@@ -390,19 +390,15 @@
return 'contiguous'
-def get_is_contig_func_name(c_or_f, ndim):
- return "__pyx_memviewslice_is_%s_contig%d" % (c_or_f, ndim)
+def get_is_contig_func_name(contig_type, ndim):
+ assert contig_type in ('C', 'F')
+ return "__pyx_memviewslice_is_contig_%s%d" % (contig_type, ndim)
-def get_is_contig_utility(c_contig, ndim):
- C = dict(context, ndim=ndim)
- if c_contig:
- utility = load_memview_c_utility("MemviewSliceIsCContig", C,
- requires=[is_contig_utility])
- else:
- utility = load_memview_c_utility("MemviewSliceIsFContig", C,
- requires=[is_contig_utility])
-
+def get_is_contig_utility(contig_type, ndim):
+ assert contig_type in ('C', 'F')
+ C = dict(context, ndim=ndim, contig_type=contig_type)
+ utility = load_memview_c_utility("MemviewSliceCheckContig", C, requires=[is_contig_utility])
return utility
@@ -488,18 +484,23 @@
return "__pyx_memoryview_copy_slice_%s_%s" % (
memview.specialization_suffix(), c_or_f)
+
def get_copy_new_utility(pos, from_memview, to_memview):
- if from_memview.dtype != to_memview.dtype:
- return error(pos, "dtypes must be the same!")
+ if (from_memview.dtype != to_memview.dtype and
+ not (from_memview.dtype.is_const and from_memview.dtype.const_base_type == to_memview.dtype)):
+ error(pos, "dtypes must be the same!")
+ return
if len(from_memview.axes) != len(to_memview.axes):
- return error(pos, "number of dimensions must be same")
+ error(pos, "number of dimensions must be same")
+ return
if not (to_memview.is_c_contig or to_memview.is_f_contig):
- return error(pos, "to_memview must be c or f contiguous.")
+ error(pos, "to_memview must be c or f contiguous.")
+ return
for (access, packing) in from_memview.axes:
if access != 'direct':
- return error(
- pos, "cannot handle 'full' or 'ptr' access at this time.")
+ error(pos, "cannot handle 'full' or 'ptr' access at this time.")
+ return
if to_memview.is_c_contig:
mode = 'c'
@@ -520,6 +521,7 @@
dtype_is_object=int(to_memview.dtype.is_pyobject)),
requires=[copy_contents_new_utility])
+
def get_axes_specs(env, axes):
'''
get_axes_specs(env, axes) -> list of (access, packing) specs for each axis.
@@ -809,18 +811,15 @@
}
memviewslice_declare_code = load_memview_c_utility(
"MemviewSliceStruct",
- proto_block='utility_code_proto_before_types',
context=context,
requires=[])
-atomic_utility = load_memview_c_utility("Atomics", context,
- proto_block='utility_code_proto_before_types')
+atomic_utility = load_memview_c_utility("Atomics", context)
memviewslice_init_code = load_memview_c_utility(
"MemviewSliceInit",
context=dict(context, BUF_MAX_NDIMS=Options.buffer_max_dims),
requires=[memviewslice_declare_code,
- Buffer.acquire_utility_code,
atomic_utility],
)
@@ -842,7 +841,7 @@
context=context,
requires=[Buffer.GetAndReleaseBufferUtilityCode(),
Buffer.buffer_struct_declare_code,
- Buffer.empty_bufstruct_utility,
+ Buffer.buffer_formats_declare_code,
memviewslice_init_code,
is_contig_utility,
overlapping_utility,
diff -Nru cython-0.26.1/Cython/Compiler/ModuleNode.py cython-0.29.14/Cython/Compiler/ModuleNode.py
--- cython-0.26.1/Cython/Compiler/ModuleNode.py 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/ModuleNode.py 2019-11-01 14:13:39.000000000 +0000
@@ -7,14 +7,16 @@
import cython
cython.declare(Naming=object, Options=object, PyrexTypes=object, TypeSlots=object,
error=object, warning=object, py_object_type=object, UtilityCode=object,
- EncodedString=object)
+ EncodedString=object, re=object)
+from collections import defaultdict
import json
-import os
import operator
+import os
+import re
+
from .PyrexTypes import CPtrType
from . import Future
-
from . import Annotate
from . import Code
from . import Naming
@@ -26,8 +28,8 @@
from .Errors import error, warning
from .PyrexTypes import py_object_type
-from ..Utils import open_new_file, replace_suffix, decode_filename
-from .Code import UtilityCode
+from ..Utils import open_new_file, replace_suffix, decode_filename, build_hex_version
+from .Code import UtilityCode, IncludeCode
from .StringEncoding import EncodedString
from .Pythran import has_np_pythran
@@ -85,15 +87,15 @@
self.scope.utility_code_list.extend(scope.utility_code_list)
+ for inc in scope.c_includes.values():
+ self.scope.process_include(inc)
+
def extend_if_not_in(L1, L2):
for x in L2:
if x not in L1:
L1.append(x)
- extend_if_not_in(self.scope.include_files, scope.include_files)
extend_if_not_in(self.scope.included_files, scope.included_files)
- extend_if_not_in(self.scope.python_include_files,
- scope.python_include_files)
if merge_scope:
# Ensure that we don't generate import code for these entries!
@@ -112,7 +114,7 @@
env.doc = self.doc = None
elif Options.embed_pos_in_docstring:
env.doc = EncodedString(u'File: %s (starting at line %s)' % Nodes.relative_position(self.pos))
- if not self.doc is None:
+ if self.doc is not None:
env.doc = EncodedString(env.doc + u'\n' + self.doc)
env.doc.encoding = self.doc.encoding
else:
@@ -174,6 +176,7 @@
h_guard = Naming.h_guard_prefix + self.api_name(env)
h_code.put_h_guard(h_guard)
h_code.putln("")
+ h_code.putln('#include "Python.h"')
self.generate_type_header_code(h_types, h_code)
if options.capi_reexport_cincludes:
self.generate_includes(env, [], h_code)
@@ -201,10 +204,13 @@
h_code.putln("")
h_code.putln("#endif /* !%s */" % api_guard)
h_code.putln("")
+ h_code.putln("/* WARNING: the interface of the module init function changed in CPython 3.5. */")
+ h_code.putln("/* It now returns a PyModuleDef instance instead of a PyModule instance. */")
+ h_code.putln("")
h_code.putln("#if PY_MAJOR_VERSION < 3")
h_code.putln("PyMODINIT_FUNC init%s(void);" % env.module_name)
h_code.putln("#else")
- h_code.putln("PyMODINIT_FUNC PyInit_%s(void);" % env.module_name)
+ h_code.putln("PyMODINIT_FUNC %s(void);" % self.mod_init_func_cname('PyInit', env))
h_code.putln("#endif")
h_code.putln("")
h_code.putln("#endif /* !%s */" % h_guard)
@@ -241,6 +247,11 @@
h_code.put_generated_by()
api_guard = Naming.api_guard_prefix + self.api_name(env)
h_code.put_h_guard(api_guard)
+ # Work around https://bugs.python.org/issue4709
+ h_code.putln('#ifdef __MINGW64__')
+ h_code.putln('#define MS_WIN64')
+ h_code.putln('#endif')
+
h_code.putln('#include "Python.h"')
if result.h_file:
h_code.putln('#include "%s"' % os.path.basename(result.h_file))
@@ -266,17 +277,17 @@
h_code.putln("static %s = 0;" % type.declaration_code(cname))
h_code.putln("#define %s (*%s)" % (entry.name, cname))
h_code.put(UtilityCode.load_as_string("PyIdentifierFromString", "ImportExport.c")[0])
- h_code.put(UtilityCode.load_as_string("ModuleImport", "ImportExport.c")[1])
if api_vars:
h_code.put(UtilityCode.load_as_string("VoidPtrImport", "ImportExport.c")[1])
if api_funcs:
h_code.put(UtilityCode.load_as_string("FunctionImport", "ImportExport.c")[1])
if api_extension_types:
+ h_code.put(UtilityCode.load_as_string("TypeImport", "ImportExport.c")[0])
h_code.put(UtilityCode.load_as_string("TypeImport", "ImportExport.c")[1])
h_code.putln("")
h_code.putln("static int import_%s(void) {" % self.api_name(env))
h_code.putln("PyObject *module = 0;")
- h_code.putln('module = __Pyx_ImportModule("%s");' % env.qualified_name)
+ h_code.putln('module = PyImport_ImportModule("%s");' % env.qualified_name)
h_code.putln("if (!module) goto bad;")
for entry in api_funcs:
cname = env.mangle(Naming.func_prefix_api, entry.name)
@@ -290,11 +301,10 @@
h_code.putln(
'if (__Pyx_ImportVoidPtr(module, "%s", (void **)&%s, "%s") < 0) goto bad;'
% (entry.name, cname, sig))
+ with ModuleImportGenerator(h_code, imported_modules={env.qualified_name: 'module'}) as import_generator:
+ for entry in api_extension_types:
+ self.generate_type_import_call(entry.type, h_code, import_generator, error_code="goto bad;")
h_code.putln("Py_DECREF(module); module = 0;")
- for entry in api_extension_types:
- self.generate_type_import_call(
- entry.type, h_code,
- "if (!%s) goto bad;" % entry.type.typeptr_cname)
h_code.putln("return 0;")
h_code.putln("bad:")
h_code.putln("Py_XDECREF(module);")
@@ -355,17 +365,25 @@
code = globalstate['before_global_var']
code.putln('#define __Pyx_MODULE_NAME "%s"' % self.full_module_name)
- code.putln("int %s%s = 0;" % (Naming.module_is_main, self.full_module_name.replace('.', '__')))
+ module_is_main = "%s%s" % (Naming.module_is_main, self.full_module_name.replace('.', '__'))
+ code.putln("extern int %s;" % module_is_main)
+ code.putln("int %s = 0;" % module_is_main)
code.putln("")
code.putln("/* Implementation of '%s' */" % env.qualified_name)
+ code = globalstate['late_includes']
+ code.putln("/* Late includes */")
+ self.generate_includes(env, modules, code, early=False)
+
code = globalstate['all_the_rest']
self.generate_cached_builtins_decls(env, code)
self.generate_lambda_definitions(env, code)
# generate normal variable and function definitions
self.generate_variable_definitions(env, code)
+
self.body.generate_function_definitions(env, code)
+
code.mark_pos(None)
self.generate_typeobj_definitions(env, code)
self.generate_method_table(env, code)
@@ -373,6 +391,9 @@
self.generate_import_star(env, code)
self.generate_pymoduledef_struct(env, code)
+ # initialise the macro to reduce the code size of one-time functionality
+ code.putln(UtilityCode.load_as_string("SmallCodeConfig", "ModuleSetupCode.c")[0].strip())
+
# init_globals is inserted before this
self.generate_module_init_func(modules[:-1], env, globalstate['init_module'])
self.generate_module_cleanup_func(env, globalstate['cleanup_module'])
@@ -443,19 +464,18 @@
tb = env.context.gdb_debug_outputwriter
markers = ccodewriter.buffer.allmarkers()
- d = {}
+ d = defaultdict(list)
for c_lineno, cython_lineno in enumerate(markers):
if cython_lineno > 0:
- d.setdefault(cython_lineno, []).append(c_lineno + 1)
+ d[cython_lineno].append(c_lineno + 1)
tb.start('LineNumberMapping')
for cython_lineno, c_linenos in sorted(d.items()):
- attrs = {
- 'c_linenos': ' '.join(map(str, c_linenos)),
- 'cython_lineno': str(cython_lineno),
- }
- tb.start('LineNumber', attrs)
- tb.end('LineNumber')
+ tb.add_entry(
+ 'LineNumber',
+ c_linenos=' '.join(map(str, c_linenos)),
+ cython_lineno=str(cython_lineno),
+ )
tb.end('LineNumberMapping')
tb.serialize()
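The hunk above replaces `dict.setdefault` with `collections.defaultdict(list)` when grouping generated C lines by Cython source line; both spell the same grouping, the latter more directly:

```python
from collections import defaultdict

markers = [0, 5, 5, 0, 7]  # Cython line per generated C line (0 = no mapping)

d = defaultdict(list)
for c_lineno, cython_lineno in enumerate(markers):
    if cython_lineno > 0:
        d[cython_lineno].append(c_lineno + 1)

print(sorted(d.items()))  # [(5, [2, 3]), (7, [5])]
```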
@@ -610,25 +630,30 @@
code.putln("")
code.putln("#define PY_SSIZE_T_CLEAN")
- for filename in env.python_include_files:
- code.putln('#include "%s"' % filename)
+ for inc in sorted(env.c_includes.values(), key=IncludeCode.sortkey):
+ if inc.location == inc.INITIAL:
+ inc.write(code)
code.putln("#ifndef Py_PYTHON_H")
code.putln(" #error Python headers needed to compile C extensions, "
"please install development version of Python.")
code.putln("#elif PY_VERSION_HEX < 0x02060000 || "
- "(0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03020000)")
- code.putln(" #error Cython requires Python 2.6+ or Python 3.2+.")
+ "(0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000)")
+ code.putln(" #error Cython requires Python 2.6+ or Python 3.3+.")
code.putln("#else")
code.globalstate["end"].putln("#endif /* Py_PYTHON_H */")
from .. import __version__
code.putln('#define CYTHON_ABI "%s"' % __version__.replace('.', '_'))
+ code.putln('#define CYTHON_HEX_VERSION %s' % build_hex_version(__version__))
+ code.putln("#define CYTHON_FUTURE_DIVISION %d" % (
+ Future.division in env.context.future_directives))
self._put_setup_code(code, "CModulePreamble")
if env.context.options.cplus:
self._put_setup_code(code, "CppInitCode")
else:
self._put_setup_code(code, "CInitCode")
+ self._put_setup_code(code, "PythonCompatibility")
self._put_setup_code(code, "MathInitCode")
if options.c_line_in_traceback:
@@ -642,29 +667,16 @@
}
""" % (Naming.filename_cname, Naming.filetable_cname, Naming.lineno_cname, cinfo))
- code.put("""
-#if PY_MAJOR_VERSION >= 3
- #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y)
- #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y)
-#else
-""")
- if Future.division in env.context.future_directives:
- code.putln(" #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y)")
- code.putln(" #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y)")
- else:
- code.putln(" #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y)")
- code.putln(" #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y)")
- code.putln("#endif")
-
code.putln("")
self.generate_extern_c_macro_definition(code)
code.putln("")
code.putln("#define %s" % Naming.h_guard_prefix + self.api_name(env))
code.putln("#define %s" % Naming.api_guard_prefix + self.api_name(env))
- self.generate_includes(env, cimported_modules, code)
+ code.putln("/* Early includes */")
+ self.generate_includes(env, cimported_modules, code, late=False)
code.putln("")
- code.putln("#ifdef PYREX_WITHOUT_ASSERTIONS")
+ code.putln("#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS)")
code.putln("#define CYTHON_WITHOUT_ASSERTIONS")
code.putln("#endif")
code.putln("")
@@ -682,10 +694,13 @@
if c_string_type not in ('bytes', 'bytearray') and not c_string_encoding:
error(self.pos, "a default encoding must be provided if c_string_type is not a byte type")
code.putln('#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII %s' % int(c_string_encoding == 'ascii'))
+ code.putln('#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 %s' %
+ int(c_string_encoding.replace('-', '').lower() == 'utf8'))
if c_string_encoding == 'default':
code.putln('#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT 1')
else:
- code.putln('#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT 0')
+ code.putln('#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT '
+ '(PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8)')
code.putln('#define __PYX_DEFAULT_STRING_ENCODING "%s"' % c_string_encoding)
if c_string_type == 'bytearray':
c_string_func_name = 'ByteArray'
@@ -703,10 +718,10 @@
code.put(Nodes.branch_prediction_macros)
code.putln('static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; }')
code.putln('')
- code.putln('static PyObject *%s;' % env.module_cname)
+ code.putln('static PyObject *%s = NULL;' % env.module_cname)
code.putln('static PyObject *%s;' % env.module_dict_cname)
code.putln('static PyObject *%s;' % Naming.builtins_cname)
- code.putln('static PyObject *%s;' % Naming.cython_runtime_cname)
+ code.putln('static PyObject *%s = NULL;' % Naming.cython_runtime_cname)
code.putln('static PyObject *%s;' % Naming.empty_tuple)
code.putln('static PyObject *%s;' % Naming.empty_bytes)
code.putln('static PyObject *%s;' % Naming.empty_unicode)
@@ -717,6 +732,7 @@
code.putln('static const char * %s= %s;' % (Naming.cfilenm_cname, Naming.file_c_macro))
code.putln('static const char *%s;' % Naming.filename_cname)
+ env.use_utility_code(UtilityCode.load_cached("FastTypeChecks", "ModuleSetupCode.c"))
if has_np_pythran(env):
env.use_utility_code(UtilityCode.load_cached("PythranConversion", "CppSupport.cpp"))
@@ -735,16 +751,17 @@
code.putln(" #define DL_IMPORT(_T) _T")
code.putln("#endif")
- def generate_includes(self, env, cimported_modules, code):
+ def generate_includes(self, env, cimported_modules, code, early=True, late=True):
includes = []
- for filename in env.include_files:
- byte_decoded_filenname = str(filename)
- if byte_decoded_filenname[0] == '<' and byte_decoded_filenname[-1] == '>':
- code.putln('#include %s' % byte_decoded_filenname)
- else:
- code.putln('#include "%s"' % byte_decoded_filenname)
-
- code.putln_openmp("#include <omp.h>")
+ for inc in sorted(env.c_includes.values(), key=IncludeCode.sortkey):
+ if inc.location == inc.EARLY:
+ if early:
+ inc.write(code)
+ elif inc.location == inc.LATE:
+ if late:
+ inc.write(code)
+ if early:
+ code.putln_openmp("#include <omp.h>")
def generate_filename_table(self, code):
from os.path import isabs, basename
@@ -893,19 +910,94 @@
[base_class.empty_declaration_code() for base_class in type.base_classes])
code.put(" : public %s" % base_class_decl)
code.putln(" {")
+ py_attrs = [e for e in scope.entries.values()
+ if e.type.is_pyobject and not e.is_inherited]
has_virtual_methods = False
- has_destructor = False
+ constructor = None
+ destructor = None
for attr in scope.var_entries:
if attr.type.is_cfunction and attr.type.is_static_method:
code.put("static ")
- elif attr.type.is_cfunction and attr.name != "":
+ elif attr.name == "<init>":
+ constructor = attr
+ elif attr.name == "":
+ destructor = attr
+ elif attr.type.is_cfunction:
code.put("virtual ")
has_virtual_methods = True
- if attr.cname[0] == '~':
- has_destructor = True
code.putln("%s;" % attr.type.declaration_code(attr.cname))
- if has_virtual_methods and not has_destructor:
- code.putln("virtual ~%s() { }" % type.cname)
+ is_implementing = 'init_module' in code.globalstate.parts
+ if constructor or py_attrs:
+ if constructor:
+ arg_decls = []
+ arg_names = []
+ for arg in constructor.type.original_args[
+ :len(constructor.type.args)-constructor.type.optional_arg_count]:
+ arg_decls.append(arg.declaration_code())
+ arg_names.append(arg.cname)
+ if constructor.type.optional_arg_count:
+ arg_decls.append(constructor.type.op_arg_struct.declaration_code(Naming.optional_args_cname))
+ arg_names.append(Naming.optional_args_cname)
+ if not arg_decls:
+ arg_decls = ["void"]
+ else:
+ arg_decls = ["void"]
+ arg_names = []
+ if is_implementing:
+ code.putln("%s(%s) {" % (type.cname, ", ".join(arg_decls)))
+ if py_attrs:
+ code.put_ensure_gil()
+ for attr in py_attrs:
+ code.put_init_var_to_py_none(attr, nanny=False);
+ if constructor:
+ code.putln("%s(%s);" % (constructor.cname, ", ".join(arg_names)))
+ if py_attrs:
+ code.put_release_ensured_gil()
+ code.putln("}")
+ else:
+ code.putln("%s(%s);" % (type.cname, ", ".join(arg_decls)))
+ if destructor or py_attrs or has_virtual_methods:
+ if has_virtual_methods:
+ code.put("virtual ")
+ if is_implementing:
+ code.putln("~%s() {" % type.cname)
+ if py_attrs:
+ code.put_ensure_gil()
+ if destructor:
+ code.putln("%s();" % destructor.cname)
+ if py_attrs:
+ for attr in py_attrs:
+ code.put_var_xdecref(attr, nanny=False);
+ code.put_release_ensured_gil()
+ code.putln("}")
+ else:
+ code.putln("~%s();" % type.cname)
+ if py_attrs:
+ # Also need copy constructor and assignment operators.
+ if is_implementing:
+ code.putln("%s(const %s& __Pyx_other) {" % (type.cname, type.cname))
+ code.put_ensure_gil()
+ for attr in scope.var_entries:
+ if not attr.type.is_cfunction:
+ code.putln("%s = __Pyx_other.%s;" % (attr.cname, attr.cname))
+ code.put_var_incref(attr, nanny=False)
+ code.put_release_ensured_gil()
+ code.putln("}")
+ code.putln("%s& operator=(const %s& __Pyx_other) {" % (type.cname, type.cname))
+ code.putln("if (this != &__Pyx_other) {")
+ code.put_ensure_gil()
+ for attr in scope.var_entries:
+ if not attr.type.is_cfunction:
+ code.put_var_xdecref(attr, nanny=False);
+ code.putln("%s = __Pyx_other.%s;" % (attr.cname, attr.cname))
+ code.put_var_incref(attr, nanny=False)
+ code.put_release_ensured_gil()
+ code.putln("}")
+ code.putln("return *this;")
+ code.putln("}")
+ else:
+ code.putln("%s(const %s& __Pyx_other);" % (type.cname, type.cname))
+ code.putln("%s& operator=(const %s& __Pyx_other);" % (type.cname, type.cname))
code.putln("};")
def generate_enum_definition(self, entry, code):
@@ -1133,29 +1225,31 @@
self.generate_traverse_function(scope, code, entry)
if scope.needs_tp_clear():
self.generate_clear_function(scope, code, entry)
- if scope.defines_any(["__getitem__"]):
+ if scope.defines_any_special(["__getitem__"]):
self.generate_getitem_int_function(scope, code)
- if scope.defines_any(["__setitem__", "__delitem__"]):
+ if scope.defines_any_special(["__setitem__", "__delitem__"]):
self.generate_ass_subscript_function(scope, code)
- if scope.defines_any(["__getslice__", "__setslice__", "__delslice__"]):
+ if scope.defines_any_special(["__getslice__", "__setslice__", "__delslice__"]):
warning(self.pos,
"__getslice__, __setslice__, and __delslice__ are not supported by Python 3, "
"use __getitem__, __setitem__, and __delitem__ instead", 1)
code.putln("#if PY_MAJOR_VERSION >= 3")
code.putln("#error __getslice__, __setslice__, and __delslice__ not supported in Python 3.")
code.putln("#endif")
- if scope.defines_any(["__setslice__", "__delslice__"]):
+ if scope.defines_any_special(["__setslice__", "__delslice__"]):
self.generate_ass_slice_function(scope, code)
- if scope.defines_any(["__getattr__", "__getattribute__"]):
+ if scope.defines_any_special(["__getattr__", "__getattribute__"]):
self.generate_getattro_function(scope, code)
- if scope.defines_any(["__setattr__", "__delattr__"]):
+ if scope.defines_any_special(["__setattr__", "__delattr__"]):
self.generate_setattro_function(scope, code)
- if scope.defines_any(["__get__"]):
+ if scope.defines_any_special(["__get__"]):
self.generate_descr_get_function(scope, code)
- if scope.defines_any(["__set__", "__delete__"]):
+ if scope.defines_any_special(["__set__", "__delete__"]):
self.generate_descr_set_function(scope, code)
- if scope.defines_any(["__dict__"]):
+ if not scope.is_closure_class_scope and scope.defines_any(["__dict__"]):
self.generate_dict_getter_function(scope, code)
+ if scope.defines_any_special(TypeSlots.richcmp_special_methods):
+ self.generate_richcmp_function(scope, code)
self.generate_property_accessors(scope, code)
self.generate_method_table(scope, code)
self.generate_getset_table(scope, code)
@@ -1351,7 +1445,7 @@
if not is_final_type:
# in Py3.4+, call tp_finalize() as early as possible
- code.putln("#if PY_VERSION_HEX >= 0x030400a1")
+ code.putln("#if CYTHON_USE_TP_FINALIZE")
if needs_gc:
finalised_check = '!_PyGC_FINALIZED(o)'
else:
@@ -1523,7 +1617,7 @@
code.putln("}")
def generate_clear_function(self, scope, code, cclass_entry):
- tp_slot = TypeSlots.GCDependentSlot("tp_clear")
+ tp_slot = TypeSlots.get_slot_by_name("tp_clear")
slot_func = scope.mangle_internal("tp_clear")
base_type = scope.parent_type.base_type
if tp_slot.slot_code(scope) != slot_func:
@@ -1723,6 +1817,76 @@
code.putln(
"}")
+ def generate_richcmp_function(self, scope, code):
+ if scope.lookup_here("__richcmp__"):
+ # user implemented, nothing to do
+ return
+ # otherwise, we have to generate it from the Python special methods
+ richcmp_cfunc = scope.mangle_internal("tp_richcompare")
+ code.putln("")
+ code.putln("static PyObject *%s(PyObject *o1, PyObject *o2, int op) {" % richcmp_cfunc)
+ code.putln("switch (op) {")
+
+ class_scopes = []
+ cls = scope.parent_type
+ while cls is not None and not cls.entry.visibility == 'extern':
+ class_scopes.append(cls.scope)
+ cls = cls.scope.parent_type.base_type
+ assert scope in class_scopes
+
+ extern_parent = None
+ if cls and cls.entry.visibility == 'extern':
+ # need to call up into base classes as we may not know all implemented comparison methods
+ extern_parent = cls if cls.typeptr_cname else scope.parent_type.base_type
+
+ eq_entry = None
+ has_ne = False
+ for cmp_method in TypeSlots.richcmp_special_methods:
+ for class_scope in class_scopes:
+ entry = class_scope.lookup_here(cmp_method)
+ if entry is not None:
+ break
+ else:
+ continue
+
+ cmp_type = cmp_method.strip('_').upper() # e.g. "__eq__" -> EQ
+ code.putln("case Py_%s: {" % cmp_type)
+ if cmp_method == '__eq__':
+ eq_entry = entry
+ # Python itself does not do this optimisation, it seems...
+ #code.putln("if (o1 == o2) return __Pyx_NewRef(Py_True);")
+ elif cmp_method == '__ne__':
+ has_ne = True
+ # Python itself does not do this optimisation, it seems...
+ #code.putln("if (o1 == o2) return __Pyx_NewRef(Py_False);")
+ code.putln("return %s(o1, o2);" % entry.func_cname)
+ code.putln("}")
+
+ if eq_entry and not has_ne and not extern_parent:
+ code.putln("case Py_NE: {")
+ code.putln("PyObject *ret;")
+ # Python itself does not do this optimisation, it seems...
+ #code.putln("if (o1 == o2) return __Pyx_NewRef(Py_False);")
+ code.putln("ret = %s(o1, o2);" % eq_entry.func_cname)
+ code.putln("if (likely(ret && ret != Py_NotImplemented)) {")
+ code.putln("int b = __Pyx_PyObject_IsTrue(ret); Py_DECREF(ret);")
+ code.putln("if (unlikely(b < 0)) return NULL;")
+ code.putln("ret = (b) ? Py_False : Py_True;")
+ code.putln("Py_INCREF(ret);")
+ code.putln("}")
+ code.putln("return ret;")
+ code.putln("}")
+
+ code.putln("default: {")
+ if extern_parent and extern_parent.typeptr_cname:
+ code.putln("if (likely(%s->tp_richcompare)) return %s->tp_richcompare(o1, o2, op);" % (
+ extern_parent.typeptr_cname, extern_parent.typeptr_cname))
+ code.putln("return __Pyx_NewRef(Py_NotImplemented);")
+ code.putln("}")
+
+ code.putln("}") # switch
+ code.putln("}")
+
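The `Py_NE` fallback generated above negates the result of `__eq__` unless it returned `NotImplemented`. The same semantics, sketched in plain Python rather than the generated C:

```python
def derived_ne(o1, o2):
    """Mirror of the generated Py_NE fallback: call __eq__ and negate
    the result unless it is NotImplemented."""
    ret = o1.__eq__(o2)
    if ret is NotImplemented:
        return ret
    return not ret

class Point:
    def __init__(self, x):
        self.x = x
    def __eq__(self, other):
        if not isinstance(other, Point):
            return NotImplemented
        return self.x == other.x

print(derived_ne(Point(1), Point(2)))  # True
print(derived_ne(Point(1), Point(1)))  # False
print(derived_ne(Point(1), "spam"))    # NotImplemented
```

Passing `NotImplemented` through unchanged lets the interpreter try the reflected comparison on the other operand, just as the generated slot returns it to CPython.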
def generate_getattro_function(self, scope, code):
# First try to get the attribute using __getattribute__, if defined, or
# PyObject_GenericGetAttr.
@@ -1730,16 +1894,19 @@
# If that raises an AttributeError, call the __getattr__ if defined.
#
# In both cases, defined can be in this class, or any base class.
- def lookup_here_or_base(n, type=None):
+ def lookup_here_or_base(n, tp=None, extern_return=None):
# Recursive lookup
- if type is None:
- type = scope.parent_type
- r = type.scope.lookup_here(n)
- if r is None and \
- type.base_type is not None:
- return lookup_here_or_base(n, type.base_type)
- else:
- return r
+ if tp is None:
+ tp = scope.parent_type
+ r = tp.scope.lookup_here(n)
+ if r is None:
+ if tp.is_external and extern_return is not None:
+ return extern_return
+ if tp.base_type is not None:
+ return lookup_here_or_base(n, tp.base_type)
+ return r
+
+ has_instance_dict = lookup_here_or_base("__dict__", extern_return="extern")
getattr_entry = lookup_here_or_base("__getattr__")
getattribute_entry = lookup_here_or_base("__getattribute__")
code.putln("")
@@ -1751,8 +1918,20 @@
"PyObject *v = %s(o, n);" % (
getattribute_entry.func_cname))
else:
+ if not has_instance_dict and scope.parent_type.is_final_type:
+ # Final with no dict => use faster type attribute lookup.
+ code.globalstate.use_utility_code(
+ UtilityCode.load_cached("PyObject_GenericGetAttrNoDict", "ObjectHandling.c"))
+ generic_getattr_cfunc = "__Pyx_PyObject_GenericGetAttrNoDict"
+ elif not has_instance_dict or has_instance_dict == "extern":
+ # No dict in the known ancestors, but don't know about extern ancestors or subtypes.
+ code.globalstate.use_utility_code(
+ UtilityCode.load_cached("PyObject_GenericGetAttr", "ObjectHandling.c"))
+ generic_getattr_cfunc = "__Pyx_PyObject_GenericGetAttr"
+ else:
+ generic_getattr_cfunc = "PyObject_GenericGetAttr"
code.putln(
- "PyObject *v = PyObject_GenericGetAttr(o, n);")
+ "PyObject *v = %s(o, n);" % generic_getattr_cfunc)
if getattr_entry is not None:
code.putln(
"if (!v && PyErr_ExceptionMatches(PyExc_AttributeError)) {")
@@ -1976,18 +2155,24 @@
if env.is_c_class_scope and not env.pyfunc_entries:
return
binding = env.directives['binding']
+
code.putln("")
+ wrapper_code_writer = code.insertion_point()
+
code.putln(
"static PyMethodDef %s[] = {" % (
env.method_table_cname))
for entry in env.pyfunc_entries:
if not entry.fused_cfunction and not (binding and entry.is_overridable):
- code.put_pymethoddef(entry, ",")
+ code.put_pymethoddef(entry, ",", wrapper_code_writer=wrapper_code_writer)
code.putln(
"{0, 0, 0, 0}")
code.putln(
"};")
+ if wrapper_code_writer.getvalue():
+ wrapper_code_writer.putln("")
+
def generate_dict_getter_function(self, scope, code):
dict_attr = scope.lookup_here("__dict__")
if not dict_attr or not dict_attr.is_variable:
@@ -2063,7 +2248,7 @@
code.putln("if (0);") # so the first one can be "else if"
msvc_count = 0
for name, entry in sorted(env.entries.items()):
- if entry.is_cglobal and entry.used:
+ if entry.is_cglobal and entry.used and not entry.type.is_const:
msvc_count += 1
if msvc_count % 100 == 0:
code.putln("#ifdef _MSC_VER")
@@ -2102,22 +2287,46 @@
code.putln("return -1;")
code.putln("}")
code.putln("")
- code.putln(UtilityCode.load_cached("ImportStar", "ImportExport.c").impl)
+ code.putln(UtilityCode.load_as_string("ImportStar", "ImportExport.c")[1])
code.exit_cfunc_scope() # done with labels
def generate_module_init_func(self, imported_modules, env, code):
+ subfunction = self.mod_init_subfunction(self.scope, code)
+
code.enter_cfunc_scope(self.scope)
code.putln("")
- header2 = "PyMODINIT_FUNC init%s(void)" % env.module_name
- header3 = "PyMODINIT_FUNC PyInit_%s(void)" % env.module_name
+ code.putln(UtilityCode.load_as_string("PyModInitFuncType", "ModuleSetupCode.c")[0])
+ header2 = "__Pyx_PyMODINIT_FUNC init%s(void)" % env.module_name
+ header3 = "__Pyx_PyMODINIT_FUNC %s(void)" % self.mod_init_func_cname('PyInit', env)
code.putln("#if PY_MAJOR_VERSION < 3")
- code.putln("%s; /*proto*/" % header2)
+ # Optimise for small code size as the module init function is only executed once.
+ code.putln("%s CYTHON_SMALL_CODE; /*proto*/" % header2)
code.putln(header2)
code.putln("#else")
- code.putln("%s; /*proto*/" % header3)
+ code.putln("%s CYTHON_SMALL_CODE; /*proto*/" % header3)
code.putln(header3)
- code.putln("#endif")
+
+ # CPython 3.5+ supports multi-phase module initialisation (gives access to __spec__, __file__, etc.)
+ code.putln("#if CYTHON_PEP489_MULTI_PHASE_INIT")
+ code.putln("{")
+ code.putln("return PyModuleDef_Init(&%s);" % Naming.pymoduledef_cname)
+ code.putln("}")
+
+ mod_create_func = UtilityCode.load_as_string("ModuleCreationPEP489", "ModuleSetupCode.c")[1]
+ code.put(mod_create_func)
+
+ code.putln("")
+ # main module init code lives in Py_mod_exec function, not in PyInit function
+ code.putln("static CYTHON_SMALL_CODE int %s(PyObject *%s)" % (
+ self.mod_init_func_cname(Naming.pymodule_exec_func_cname, env),
+ Naming.pymodinit_module_arg))
+ code.putln("#endif") # PEP489
+
+ code.putln("#endif") # Py3
+
+ # start of module init/exec function (pre/post PEP 489)
code.putln("{")
+
tempdecl_code = code.insertion_point()
profile = code.globalstate.directives['profile']
@@ -2126,24 +2335,42 @@
code.globalstate.use_utility_code(UtilityCode.load_cached("Profile", "Profile.c"))
code.put_declare_refcount_context()
+ code.putln("#if CYTHON_PEP489_MULTI_PHASE_INIT")
+ # Most extension modules simply can't deal with it, and Cython isn't ready either.
+ # See issues listed here: https://docs.python.org/3/c-api/init.html#sub-interpreter-support
+ code.putln("if (%s) {" % Naming.module_cname)
+ # Hack: enforce single initialisation.
+ code.putln("if (%s == %s) return 0;" % (
+ Naming.module_cname,
+ Naming.pymodinit_module_arg,
+ ))
+ code.putln('PyErr_SetString(PyExc_RuntimeError,'
+ ' "Module \'%s\' has already been imported. Re-initialisation is not supported.");' %
+ env.module_name)
+ code.putln("return -1;")
+ code.putln("}")
+ code.putln("#elif PY_MAJOR_VERSION >= 3")
+ # Hack: enforce single initialisation also on reimports under different names on Python 3 (with PEP 3121/489).
+ code.putln("if (%s) return __Pyx_NewRef(%s);" % (
+ Naming.module_cname,
+ Naming.module_cname,
+ ))
+ code.putln("#endif")
+
if profile or linetrace:
tempdecl_code.put_trace_declarations()
code.put_trace_frame_init()
- code.putln("#if CYTHON_REFNANNY")
- code.putln("__Pyx_RefNanny = __Pyx_RefNannyImportAPI(\"refnanny\");")
- code.putln("if (!__Pyx_RefNanny) {")
- code.putln(" PyErr_Clear();")
- code.putln(" __Pyx_RefNanny = __Pyx_RefNannyImportAPI(\"Cython.Runtime.refnanny\");")
- code.putln(" if (!__Pyx_RefNanny)")
- code.putln(" Py_FatalError(\"failed to import 'refnanny' module\");")
- code.putln("}")
- code.putln("#endif")
+ refnanny_import_code = UtilityCode.load_as_string("ImportRefnannyAPI", "ModuleSetupCode.c")[1]
+ code.putln(refnanny_import_code.rstrip())
code.put_setup_refcount_context(header3)
env.use_utility_code(UtilityCode.load("CheckBinaryVersion", "ModuleSetupCode.c"))
code.put_error_if_neg(self.pos, "__Pyx_check_binary_version()")
+ code.putln("#ifdef __Pxy_PyFrame_Initialize_Offsets")
+ code.putln("__Pxy_PyFrame_Initialize_Offsets();")
+ code.putln("#endif")
code.putln("%s = PyTuple_New(0); %s" % (
Naming.empty_tuple, code.error_goto_if_null(Naming.empty_tuple, self.pos)))
code.putln("%s = PyBytes_FromStringAndSize(\"\", 0); %s" % (
@@ -2151,13 +2378,14 @@
code.putln("%s = PyUnicode_FromStringAndSize(\"\", 0); %s" % (
Naming.empty_unicode, code.error_goto_if_null(Naming.empty_unicode, self.pos)))
- for ext_type in ('CyFunction', 'FusedFunction', 'Coroutine', 'Generator', 'StopAsyncIteration'):
+ for ext_type in ('CyFunction', 'FusedFunction', 'Coroutine', 'Generator', 'AsyncGen', 'StopAsyncIteration'):
code.putln("#ifdef __Pyx_%s_USED" % ext_type)
code.put_error_if_neg(self.pos, "__pyx_%s_init()" % ext_type)
code.putln("#endif")
code.putln("/*--- Library function declarations ---*/")
- env.generate_library_function_declarations(code)
+ if env.directives['np_pythran']:
+ code.put_error_if_neg(self.pos, "_import_array()")
code.putln("/*--- Threads initialization code ---*/")
code.putln("#if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS")
@@ -2177,12 +2405,11 @@
code.put_error_if_neg(self.pos, "__Pyx_init_sys_getdefaultencoding_params()")
code.putln("#endif")
- __main__name = code.globalstate.get_py_string_const(
- EncodedString("__main__"), identifier=True)
code.putln("if (%s%s) {" % (Naming.module_is_main, self.full_module_name.replace('.', '__')))
- code.put_error_if_neg(self.pos, 'PyObject_SetAttrString(%s, "__name__", %s)' % (
+ code.put_error_if_neg(self.pos, 'PyObject_SetAttr(%s, %s, %s)' % (
env.module_cname,
- __main__name.cname))
+ code.intern_identifier(EncodedString("__name__")),
+ code.intern_identifier(EncodedString("__main__"))))
code.putln("}")
# set up __file__ and __path__, then add the module to sys.modules
@@ -2190,35 +2417,37 @@
if Options.cache_builtins:
code.putln("/*--- Builtin init code ---*/")
- code.put_error_if_neg(self.pos, "__Pyx_InitCachedBuiltins()")
+ code.put_error_if_neg(None, "__Pyx_InitCachedBuiltins()")
code.putln("/*--- Constants init code ---*/")
- code.put_error_if_neg(self.pos, "__Pyx_InitCachedConstants()")
+ code.put_error_if_neg(None, "__Pyx_InitCachedConstants()")
- code.putln("/*--- Global init code ---*/")
- self.generate_global_init_code(env, code)
+ code.putln("/*--- Global type/function init code ---*/")
- code.putln("/*--- Variable export code ---*/")
- self.generate_c_variable_export_code(env, code)
+ with subfunction("Global init code") as inner_code:
+ self.generate_global_init_code(env, inner_code)
- code.putln("/*--- Function export code ---*/")
- self.generate_c_function_export_code(env, code)
+ with subfunction("Variable export code") as inner_code:
+ self.generate_c_variable_export_code(env, inner_code)
- code.putln("/*--- Type init code ---*/")
- self.generate_type_init_code(env, code)
+ with subfunction("Function export code") as inner_code:
+ self.generate_c_function_export_code(env, inner_code)
- code.putln("/*--- Type import code ---*/")
- for module in imported_modules:
- self.generate_type_import_code_for_module(module, env, code)
+ with subfunction("Type init code") as inner_code:
+ self.generate_type_init_code(env, inner_code)
- code.putln("/*--- Variable import code ---*/")
- for module in imported_modules:
- self.generate_c_variable_import_code_for_module(module, env, code)
+ with subfunction("Type import code") as inner_code:
+ for module in imported_modules:
+ self.generate_type_import_code_for_module(module, env, inner_code)
- code.putln("/*--- Function import code ---*/")
- for module in imported_modules:
- self.specialize_fused_types(module)
- self.generate_c_function_import_code_for_module(module, env, code)
+ with subfunction("Variable import code") as inner_code:
+ for module in imported_modules:
+ self.generate_c_variable_import_code_for_module(module, env, inner_code)
+
+ with subfunction("Function import code") as inner_code:
+ for module in imported_modules:
+ self.specialize_fused_types(module)
+ self.generate_c_function_import_code_for_module(module, env, inner_code)
code.putln("/*--- Execution code ---*/")
code.mark_pos(None)
@@ -2251,10 +2480,9 @@
code.put_label(code.error_label)
for cname, type in code.funcstate.all_managed_temps():
code.put_xdecref(cname, type)
- # module state might not be ready for traceback generation with C-line handling yet
code.putln('if (%s) {' % env.module_cname)
code.putln('if (%s) {' % env.module_dict_cname)
- code.put_add_traceback("init %s" % env.qualified_name, include_cline=False)
+ code.put_add_traceback("init %s" % env.qualified_name)
code.globalstate.use_utility_code(Nodes.traceback_utility_code)
# Module reference and module dict are in global variables which might still be needed
# for cleanup, atexit code, etc., so leaking is better than crashing.
@@ -2262,7 +2490,7 @@
# user code in atexit or other global registries.
##code.put_decref_clear(env.module_dict_cname, py_object_type, nanny=False)
code.putln('}')
- code.put_decref_clear(env.module_cname, py_object_type, nanny=False)
+ code.put_decref_clear(env.module_cname, py_object_type, nanny=False, clear_before_decref=True)
code.putln('} else if (!PyErr_Occurred()) {')
code.putln('PyErr_SetString(PyExc_ImportError, "init %s");' % env.qualified_name)
code.putln('}')
@@ -2270,10 +2498,12 @@
code.put_finish_refcount_context()
- code.putln("#if PY_MAJOR_VERSION < 3")
- code.putln("return;")
- code.putln("#else")
+ code.putln("#if CYTHON_PEP489_MULTI_PHASE_INIT")
+ code.putln("return (%s != NULL) ? 0 : -1;" % env.module_cname)
+ code.putln("#elif PY_MAJOR_VERSION >= 3")
code.putln("return %s;" % env.module_cname)
+ code.putln("#else")
+ code.putln("return;")
code.putln("#endif")
code.putln('}')
@@ -2281,20 +2511,88 @@
code.exit_cfunc_scope()
+ def mod_init_subfunction(self, scope, orig_code):
+ """
+ Return a context manager that allows deviating the module init code generation
+ into a separate function and instead inserts a call to it.
+
+ Can be reused sequentially to create multiple functions.
+ The functions get inserted at the point where the context manager was created.
+ The call gets inserted where the context manager is used (on entry).
+ """
+ prototypes = orig_code.insertion_point()
+ prototypes.putln("")
+ function_code = orig_code.insertion_point()
+ function_code.putln("")
+
+ class ModInitSubfunction(object):
+ def __init__(self, code_type):
+ cname = '_'.join(code_type.lower().split())
+ assert re.match("^[a-z0-9_]+$", cname)
+ self.cfunc_name = "__Pyx_modinit_%s" % cname
+ self.description = code_type
+ self.tempdecl_code = None
+ self.call_code = None
+
+ def __enter__(self):
+ self.call_code = orig_code.insertion_point()
+ code = function_code
+ code.enter_cfunc_scope(scope)
+ prototypes.putln("static CYTHON_SMALL_CODE int %s(void); /*proto*/" % self.cfunc_name)
+ code.putln("static int %s(void) {" % self.cfunc_name)
+ code.put_declare_refcount_context()
+ self.tempdecl_code = code.insertion_point()
+ code.put_setup_refcount_context(self.cfunc_name)
+ # Leave a grepable marker that makes it easy to find the generator source.
+ code.putln("/*--- %s ---*/" % self.description)
+ return code
+
+ def __exit__(self, *args):
+ code = function_code
+ code.put_finish_refcount_context()
+ code.putln("return 0;")
+
+ self.tempdecl_code.put_temp_declarations(code.funcstate)
+ self.tempdecl_code = None
+
+ needs_error_handling = code.label_used(code.error_label)
+ if needs_error_handling:
+ code.put_label(code.error_label)
+ for cname, type in code.funcstate.all_managed_temps():
+ code.put_xdecref(cname, type)
+ code.put_finish_refcount_context()
+ code.putln("return -1;")
+ code.putln("}")
+ code.exit_cfunc_scope()
+ code.putln("")
+
+ if needs_error_handling:
+ self.call_code.use_label(orig_code.error_label)
+ self.call_code.putln("if (unlikely(%s() != 0)) goto %s;" % (
+ self.cfunc_name, orig_code.error_label))
+ else:
+ self.call_code.putln("(void)%s();" % self.cfunc_name)
+ self.call_code = None
+
+ return ModInitSubfunction
+
def generate_module_import_setup(self, env, code):
module_path = env.directives['set_initial_path']
if module_path == 'SOURCEFILE':
module_path = self.pos[0].filename
if module_path:
+ code.putln('if (!CYTHON_PEP489_MULTI_PHASE_INIT) {')
code.putln('if (PyObject_SetAttrString(%s, "__file__", %s) < 0) %s;' % (
env.module_cname,
code.globalstate.get_py_string_const(
EncodedString(decode_filename(module_path))).cname,
code.error_goto(self.pos)))
+ code.putln("}")
if env.is_package:
# set __path__ to mark the module as package
+ code.putln('if (!CYTHON_PEP489_MULTI_PHASE_INIT) {')
temp = code.funcstate.allocate_temp(py_object_type, True)
code.putln('%s = Py_BuildValue("[O]", %s); %s' % (
temp,
@@ -2308,10 +2606,12 @@
env.module_cname, temp, code.error_goto(self.pos)))
code.put_decref_clear(temp, py_object_type)
code.funcstate.release_temp(temp)
+ code.putln("}")
elif env.is_package:
# packages require __path__, so all we can do is try to figure
# out the module path at runtime by rerunning the import lookup
+ code.putln("if (!CYTHON_PEP489_MULTI_PHASE_INIT) {")
package_name, _ = self.full_module_name.rsplit('.', 1)
if '.' in package_name:
parent_name = '"%s"' % (package_name.rsplit('.', 1)[0],)
@@ -2325,6 +2625,7 @@
code.globalstate.get_py_string_const(
EncodedString(env.module_name)).cname),
self.pos))
+ code.putln("}")
# CPython may not have put us into sys.modules yet, but relative imports and reimports require it
fq_module_name = self.full_module_name
@@ -2404,11 +2705,11 @@
# if entry.type.is_pyobject and entry.used:
# code.putln("Py_DECREF(%s); %s = 0;" % (
# code.entry_as_pyobject(entry), entry.cname))
- code.putln('#if CYTHON_COMPILING_IN_PYPY')
- code.putln('Py_CLEAR(%s);' % Naming.builtins_cname)
- code.putln('#endif')
- code.put_decref_clear(env.module_dict_cname, py_object_type,
- nanny=False, clear_before_decref=True)
+ if Options.pre_import is not None:
+ code.put_decref_clear(Naming.preimport_cname, py_object_type,
+ nanny=False, clear_before_decref=True)
+ for cname in [env.module_dict_cname, Naming.cython_runtime_cname, Naming.builtins_cname]:
+ code.put_decref_clear(cname, py_object_type, nanny=False, clear_before_decref=True)
def generate_main_method(self, env, code):
module_is_main = "%s%s" % (Naming.module_is_main, self.full_module_name.replace('.', '__'))
@@ -2424,6 +2725,9 @@
main_method=Options.embed,
wmain_method=wmain))
+ def mod_init_func_cname(self, prefix, env):
+ return '%s_%s' % (prefix, env.module_name)
+
def generate_pymoduledef_struct(self, env, code):
if env.doc:
doc = "%s" % code.get_string_const(env.doc)
@@ -2436,18 +2740,35 @@
code.putln("")
code.putln("#if PY_MAJOR_VERSION >= 3")
+ code.putln("#if CYTHON_PEP489_MULTI_PHASE_INIT")
+ exec_func_cname = self.mod_init_func_cname(Naming.pymodule_exec_func_cname, env)
+ code.putln("static PyObject* %s(PyObject *spec, PyModuleDef *def); /*proto*/" %
+ Naming.pymodule_create_func_cname)
+ code.putln("static int %s(PyObject* module); /*proto*/" % exec_func_cname)
+
+ code.putln("static PyModuleDef_Slot %s[] = {" % Naming.pymoduledef_slots_cname)
+ code.putln("{Py_mod_create, (void*)%s}," % Naming.pymodule_create_func_cname)
+ code.putln("{Py_mod_exec, (void*)%s}," % exec_func_cname)
+ code.putln("{0, NULL}")
+ code.putln("};")
+ code.putln("#endif")
+
+ code.putln("")
code.putln("static struct PyModuleDef %s = {" % Naming.pymoduledef_cname)
- code.putln("#if PY_VERSION_HEX < 0x03020000")
- # fix C compiler warnings due to missing initialisers
- code.putln(" { PyObject_HEAD_INIT(NULL) NULL, 0, NULL },")
- code.putln("#else")
code.putln(" PyModuleDef_HEAD_INIT,")
- code.putln("#endif")
code.putln(' "%s",' % env.module_name)
code.putln(" %s, /* m_doc */" % doc)
+ code.putln("#if CYTHON_PEP489_MULTI_PHASE_INIT")
+ code.putln(" 0, /* m_size */")
+ code.putln("#else")
code.putln(" -1, /* m_size */")
+ code.putln("#endif")
code.putln(" %s /* m_methods */," % env.method_table_cname)
+ code.putln("#if CYTHON_PEP489_MULTI_PHASE_INIT")
+ code.putln(" %s, /* m_slots */" % Naming.pymoduledef_slots_cname)
+ code.putln("#else")
code.putln(" NULL, /* m_reload */")
+ code.putln("#endif")
code.putln(" NULL, /* m_traverse */")
code.putln(" NULL, /* m_clear */")
code.putln(" %s /* m_free */" % cleanup_func)
@@ -2461,6 +2782,13 @@
doc = "%s" % code.get_string_const(env.doc)
else:
doc = "0"
+
+ code.putln("#if CYTHON_PEP489_MULTI_PHASE_INIT")
+ code.putln("%s = %s;" % (
+ env.module_cname,
+ Naming.pymodinit_module_arg))
+ code.put_incref(env.module_cname, py_object_type, nanny=False)
+ code.putln("#else")
code.putln("#if PY_MAJOR_VERSION < 3")
code.putln(
'%s = Py_InitModule4("%s", %s, %s, 0, PYTHON_API_VERSION); Py_XINCREF(%s);' % (
@@ -2476,6 +2804,8 @@
Naming.pymoduledef_cname))
code.putln("#endif")
code.putln(code.error_goto_if_null(env.module_cname, self.pos))
+ code.putln("#endif") # CYTHON_PEP489_MULTI_PHASE_INIT
+
code.putln(
"%s = PyModule_GetDict(%s); %s" % (
env.module_dict_cname, env.module_cname,
@@ -2486,13 +2816,12 @@
'%s = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); %s' % (
Naming.builtins_cname,
code.error_goto_if_null(Naming.builtins_cname, self.pos)))
+ code.put_incref(Naming.builtins_cname, py_object_type, nanny=False)
code.putln(
'%s = PyImport_AddModule((char *) "cython_runtime"); %s' % (
Naming.cython_runtime_cname,
code.error_goto_if_null(Naming.cython_runtime_cname, self.pos)))
- code.putln('#if CYTHON_COMPILING_IN_PYPY')
- code.putln('Py_INCREF(%s);' % Naming.builtins_cname)
- code.putln('#endif')
+ code.put_incref(Naming.cython_runtime_cname, py_object_type, nanny=False)
code.putln(
'if (PyObject_SetAttrString(%s, "__builtins__", %s) < 0) %s;' % (
env.module_cname,
@@ -2504,6 +2833,7 @@
Naming.preimport_cname,
Options.pre_import,
code.error_goto_if_null(Naming.preimport_cname, self.pos)))
+ code.put_incref(Naming.preimport_cname, py_object_type, nanny=False)
def generate_global_init_code(self, env, code):
# Generate code to initialise global PyObject *
@@ -2560,6 +2890,8 @@
if entries:
env.use_utility_code(
UtilityCode.load_cached("FunctionExport", "ImportExport.c"))
+ # Note: while this looks like it could be more cheaply stored and read from a struct array,
+ # investigation shows that the resulting binary is smaller with repeated functions calls.
for entry in entries:
signature = entry.type.signature_string()
code.putln('if (__Pyx_ExportFunction("%s", (void (*)(void))%s, "%s") < 0) %s' % (
@@ -2572,9 +2904,10 @@
# Generate type import code for all exported extension types in
# an imported module.
#if module.c_class_entries:
- for entry in module.c_class_entries:
- if entry.defined_in_pxd:
- self.generate_type_import_code(env, entry.type, entry.pos, code)
+ with ModuleImportGenerator(code) as import_generator:
+ for entry in module.c_class_entries:
+ if entry.defined_in_pxd:
+ self.generate_type_import_code(env, entry.type, entry.pos, code, import_generator)
def specialize_fused_types(self, pxd_env):
"""
@@ -2596,12 +2929,10 @@
entries.append(entry)
if entries:
env.use_utility_code(
- UtilityCode.load_cached("ModuleImport", "ImportExport.c"))
- env.use_utility_code(
UtilityCode.load_cached("VoidPtrImport", "ImportExport.c"))
temp = code.funcstate.allocate_temp(py_object_type, manage_ref=True)
code.putln(
- '%s = __Pyx_ImportModule("%s"); if (!%s) %s' % (
+ '%s = PyImport_ImportModule("%s"); if (!%s) %s' % (
temp,
module.qualified_name,
temp,
@@ -2626,12 +2957,10 @@
entries.append(entry)
if entries:
env.use_utility_code(
- UtilityCode.load_cached("ModuleImport", "ImportExport.c"))
- env.use_utility_code(
UtilityCode.load_cached("FunctionImport", "ImportExport.c"))
temp = code.funcstate.allocate_temp(py_object_type, manage_ref=True)
code.putln(
- '%s = __Pyx_ImportModule("%s"); if (!%s) %s' % (
+ '%s = PyImport_ImportModule("%s"); if (!%s) %s' % (
temp,
module.qualified_name,
temp,
@@ -2649,30 +2978,33 @@
def generate_type_init_code(self, env, code):
# Generate type import code for extern extension types
# and type ready code for non-extern ones.
- for entry in env.c_class_entries:
- if entry.visibility == 'extern' and not entry.utility_code_definition:
- self.generate_type_import_code(env, entry.type, entry.pos, code)
- else:
- self.generate_base_type_import_code(env, entry, code)
- self.generate_exttype_vtable_init_code(entry, code)
- self.generate_type_ready_code(env, entry, code)
- self.generate_typeptr_assignment_code(entry, code)
+ with ModuleImportGenerator(code) as import_generator:
+ for entry in env.c_class_entries:
+ if entry.visibility == 'extern' and not entry.utility_code_definition:
+ self.generate_type_import_code(env, entry.type, entry.pos, code, import_generator)
+ else:
+ self.generate_base_type_import_code(env, entry, code, import_generator)
+ self.generate_exttype_vtable_init_code(entry, code)
+ if entry.type.early_init:
+ self.generate_type_ready_code(entry, code)
- def generate_base_type_import_code(self, env, entry, code):
+ def generate_base_type_import_code(self, env, entry, code, import_generator):
base_type = entry.type.base_type
if (base_type and base_type.module_name != env.qualified_name and not
base_type.is_builtin_type and not entry.utility_code_definition):
- self.generate_type_import_code(env, base_type, self.pos, code)
+ self.generate_type_import_code(env, base_type, self.pos, code, import_generator)
- def generate_type_import_code(self, env, type, pos, code):
+ def generate_type_import_code(self, env, type, pos, code, import_generator):
# If not already done, generate code to import the typeobject of an
# extension type defined in another module, and extract its C method
# table pointer if any.
if type in env.types_imported:
return
- env.use_utility_code(UtilityCode.load_cached("TypeImport", "ImportExport.c"))
- self.generate_type_import_call(type, code,
- code.error_goto_if_null(type.typeptr_cname, pos))
+ if type.name not in Code.ctypedef_builtins_map:
+ # see corresponding condition in generate_type_import_call() below!
+ code.globalstate.use_utility_code(
+ UtilityCode.load_cached("TypeImport", "ImportExport.c"))
+ self.generate_type_import_call(type, code, import_generator, error_pos=pos)
if type.vtabptr_cname:
code.globalstate.use_utility_code(
UtilityCode.load_cached('GetVTable', 'ImportExport.c'))
@@ -2683,9 +3015,7 @@
code.error_goto_if_null(type.vtabptr_cname, pos)))
env.types_imported.add(type)
- py3_type_name_map = {'str' : 'bytes', 'unicode' : 'str'}
-
- def generate_type_import_call(self, type, code, error_code):
+ def generate_type_import_call(self, type, code, import_generator, error_code=None, error_pos=None):
if type.typedef_flag:
objstruct = type.objstruct_cname
else:
@@ -2695,6 +3025,11 @@
condition = replacement = None
if module_name not in ('__builtin__', 'builtins'):
module_name = '"%s"' % module_name
+ elif type.name in Code.ctypedef_builtins_map:
+ # Fast path for special builtins, don't actually import
+ ctypename = Code.ctypedef_builtins_map[type.name]
+ code.putln('%s = %s;' % (type.typeptr_cname, ctypename))
+ return
else:
module_name = '__Pyx_BUILTIN_MODULE_NAME'
if type.name in Code.non_portable_builtins_map:
@@ -2703,8 +3038,14 @@
# Some builtin types have a tp_basicsize which differs from sizeof(...):
sizeof_objstruct = Code.basicsize_builtins_map[objstruct]
- code.put('%s = __Pyx_ImportType(%s,' % (
+ if not error_code:
+ assert error_pos is not None
+ error_code = code.error_goto(error_pos)
+
+ module = import_generator.imported_module(module_name, error_code)
+ code.put('%s = __Pyx_ImportType(%s, %s,' % (
type.typeptr_cname,
+ module,
module_name))
if condition and replacement:
@@ -2720,7 +3061,7 @@
if sizeof_objstruct != objstruct:
if not condition:
code.putln("") # start in new line
- code.putln("#if CYTHON_COMPILING_IN_PYPY")
+ code.putln("#if defined(PYPY_VERSION_NUM) && PYPY_VERSION_NUM < 0x050B0000")
code.putln('sizeof(%s),' % objstruct)
code.putln("#else")
code.putln('sizeof(%s),' % sizeof_objstruct)
@@ -2728,102 +3069,20 @@
else:
code.put('sizeof(%s), ' % objstruct)
- code.putln('%i); %s' % (
- not type.is_external or type.is_subclassed,
- error_code))
-
- def generate_type_ready_code(self, env, entry, code):
- # Generate a call to PyType_Ready for an extension
- # type defined in this module.
- type = entry.type
- typeobj_cname = type.typeobj_cname
- scope = type.scope
- if scope: # could be None if there was an error
- if entry.visibility != 'extern':
- for slot in TypeSlots.slot_table:
- slot.generate_dynamic_init_code(scope, code)
- code.putln(
- "if (PyType_Ready(&%s) < 0) %s" % (
- typeobj_cname,
- code.error_goto(entry.pos)))
- # Don't inherit tp_print from builtin types, restoring the
- # behavior of using tp_repr or tp_str instead.
- code.putln("%s.tp_print = 0;" % typeobj_cname)
- # Fix special method docstrings. This is a bit of a hack, but
- # unless we let PyType_Ready create the slot wrappers we have
- # a significant performance hit. (See trac #561.)
- for func in entry.type.scope.pyfunc_entries:
- is_buffer = func.name in ('__getbuffer__', '__releasebuffer__')
- if (func.is_special and Options.docstrings and
- func.wrapperbase_cname and not is_buffer):
- slot = TypeSlots.method_name_to_slot[func.name]
- preprocessor_guard = slot.preprocessor_guard_code()
- if preprocessor_guard:
- code.putln(preprocessor_guard)
- code.putln('#if CYTHON_COMPILING_IN_CPYTHON')
- code.putln("{")
- code.putln(
- 'PyObject *wrapper = PyObject_GetAttrString((PyObject *)&%s, "%s"); %s' % (
- typeobj_cname,
- func.name,
- code.error_goto_if_null('wrapper', entry.pos)))
- code.putln(
- "if (Py_TYPE(wrapper) == &PyWrapperDescr_Type) {")
- code.putln(
- "%s = *((PyWrapperDescrObject *)wrapper)->d_base;" % (
- func.wrapperbase_cname))
- code.putln(
- "%s.doc = %s;" % (func.wrapperbase_cname, func.doc_cname))
- code.putln(
- "((PyWrapperDescrObject *)wrapper)->d_base = &%s;" % (
- func.wrapperbase_cname))
- code.putln("}")
- code.putln("}")
- code.putln('#endif')
- if preprocessor_guard:
- code.putln('#endif')
- if type.vtable_cname:
- code.putln(
- "if (__Pyx_SetVtable(%s.tp_dict, %s) < 0) %s" % (
- typeobj_cname,
- type.vtabptr_cname,
- code.error_goto(entry.pos)))
- code.globalstate.use_utility_code(
- UtilityCode.load_cached('SetVTable', 'ImportExport.c'))
- if not type.scope.is_internal and not type.scope.directives['internal']:
- # scope.is_internal is set for types defined by
- # Cython (such as closures), the 'internal'
- # directive is set by users
- code.putln(
- 'if (PyObject_SetAttrString(%s, "%s", (PyObject *)&%s) < 0) %s' % (
- Naming.module_cname,
- scope.class_name,
- typeobj_cname,
- code.error_goto(entry.pos)))
- weakref_entry = scope.lookup_here("__weakref__") if not scope.is_closure_class_scope else None
- if weakref_entry:
- if weakref_entry.type is py_object_type:
- tp_weaklistoffset = "%s.tp_weaklistoffset" % typeobj_cname
- if type.typedef_flag:
- objstruct = type.objstruct_cname
- else:
- objstruct = "struct %s" % type.objstruct_cname
- code.putln("if (%s == 0) %s = offsetof(%s, %s);" % (
- tp_weaklistoffset,
- tp_weaklistoffset,
- objstruct,
- weakref_entry.cname))
- else:
- error(weakref_entry.pos, "__weakref__ slot must be of type 'object'")
- if scope.lookup_here("__reduce_cython__") if not scope.is_closure_class_scope else None:
- # Unfortunately, we cannot reliably detect whether a
- # superclass defined __reduce__ at compile time, so we must
- # do so at runtime.
- code.globalstate.use_utility_code(
- UtilityCode.load_cached('SetupReduce', 'ExtensionTypes.c'))
- code.putln('if (__Pyx_setup_reduce((PyObject*)&%s) < 0) %s' % (
- typeobj_cname,
- code.error_goto(entry.pos)))
+ # check_size
+ if type.check_size and type.check_size in ('error', 'warn', 'ignore'):
+ check_size = type.check_size
+ elif not type.is_external or type.is_subclassed:
+ check_size = 'error'
+ else:
+ raise RuntimeError("invalid value for check_size '%s' when compiling %s.%s" % (
+ type.check_size, module_name, type.name))
+ code.putln('__Pyx_ImportType_CheckSize_%s);' % check_size.title())
+
+ code.putln(' if (!%s) %s' % (type.typeptr_cname, error_code))
+
+ def generate_type_ready_code(self, entry, code):
+ Nodes.CClassDefNode.generate_type_ready_code(entry, code)
def generate_exttype_vtable_init_code(self, entry, code):
# Generate code to initialise the C method table of an
@@ -2854,14 +3113,42 @@
cast,
meth_entry.func_cname))
- def generate_typeptr_assignment_code(self, entry, code):
- # Generate code to initialise the typeptr of an extension
- # type defined in this module to point to its type object.
- type = entry.type
- if type.typeobj_cname:
- code.putln(
- "%s = &%s;" % (
- type.typeptr_cname, type.typeobj_cname))
+
+class ModuleImportGenerator(object):
+ """
+ Helper to generate module import while importing external types.
+ This is used to avoid excessive re-imports of external modules when multiple types are looked up.
+ """
+ def __init__(self, code, imported_modules=None):
+ self.code = code
+ self.imported = {}
+ if imported_modules:
+ for name, cname in imported_modules.items():
+ self.imported['"%s"' % name] = cname
+ self.temps = [] # remember original import order for freeing
+
+ def imported_module(self, module_name_string, error_code):
+ if module_name_string in self.imported:
+ return self.imported[module_name_string]
+
+ code = self.code
+ temp = code.funcstate.allocate_temp(py_object_type, manage_ref=True)
+ self.temps.append(temp)
+ code.putln('%s = PyImport_ImportModule(%s); if (unlikely(!%s)) %s' % (
+ temp, module_name_string, temp, error_code))
+ code.put_gotref(temp)
+ self.imported[module_name_string] = temp
+ return temp
+
+ def __enter__(self):
+ return self
+
+ def __exit__(self, *exc):
+ code = self.code
+ for temp in self.temps:
+ code.put_decref_clear(temp, py_object_type)
+ code.funcstate.release_temp(temp)
+
def generate_cfunction_declaration(entry, env, code, definition):
from_cy_utility = entry.used and entry.utility_code_definition
diff -Nru cython-0.26.1/Cython/Compiler/Naming.py cython-0.29.14/Cython/Compiler/Naming.py
--- cython-0.26.1/Cython/Compiler/Naming.py 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Naming.py 2019-02-08 19:14:39.000000000 +0000
@@ -28,6 +28,7 @@
py_const_prefix = pyrex_prefix + "kp_"
label_prefix = pyrex_prefix + "L"
pymethdef_prefix = pyrex_prefix + "mdef_"
+method_wrapper_prefix = pyrex_prefix + "specialmethod_"
methtab_prefix = pyrex_prefix + "methods_"
memtab_prefix = pyrex_prefix + "members_"
objstruct_prefix = pyrex_prefix + "obj_"
@@ -101,6 +102,10 @@
print_function_kwargs = pyrex_prefix + "print_kwargs"
cleanup_cname = pyrex_prefix + "module_cleanup"
pymoduledef_cname = pyrex_prefix + "moduledef"
+pymoduledef_slots_cname = pyrex_prefix + "moduledef_slots"
+pymodinit_module_arg = pyrex_prefix + "pyinit_module"
+pymodule_create_func_cname = pyrex_prefix + "pymod_create"
+pymodule_exec_func_cname = pyrex_prefix + "pymod_exec"
optional_args_cname = pyrex_prefix + "optional_args"
import_star = pyrex_prefix + "import_star"
import_star_set = pyrex_prefix + "import_star_set"
@@ -112,6 +117,9 @@
binding_cfunc = pyrex_prefix + "binding_PyCFunctionType"
fused_func_prefix = pyrex_prefix + 'fuse_'
quick_temp_cname = pyrex_prefix + "temp" # temp variable for quick'n'dirty temping
+tp_dict_version_temp = pyrex_prefix + "tp_dict_version"
+obj_dict_version_temp = pyrex_prefix + "obj_dict_version"
+type_dict_guard_temp = pyrex_prefix + "type_dict_guard"
cython_runtime_cname = pyrex_prefix + "cython_runtime"
global_code_object_cache_find = pyrex_prefix + 'find_code_object'
diff -Nru cython-0.26.1/Cython/Compiler/Nodes.py cython-0.29.14/Cython/Compiler/Nodes.py
--- cython-0.26.1/Cython/Compiler/Nodes.py 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Nodes.py 2019-06-02 09:26:16.000000000 +0000
@@ -68,10 +68,13 @@
return doc
-def _analyse_signature_annotation(annotation, env):
+def analyse_type_annotation(annotation, env, assigned_value=None):
base_type = None
+ is_ambiguous = False
explicit_pytype = explicit_ctype = False
if annotation.is_dict_literal:
+ warning(annotation.pos,
+ "Dicts should no longer be used as type annotations. Use 'cython.int' etc. directly.")
for name, value in annotation.key_value_pairs:
if not name.is_string_literal:
continue
@@ -85,14 +88,30 @@
if explicit_pytype and explicit_ctype:
warning(annotation.pos, "Duplicate type declarations found in signature annotation")
arg_type = annotation.analyse_as_type(env)
+ if annotation.is_name and not annotation.cython_attribute and annotation.name in ('int', 'long', 'float'):
+ # Map builtin numeric Python types to C types in safe cases.
+ if assigned_value is not None and arg_type is not None and not arg_type.is_pyobject:
+ assigned_type = assigned_value.infer_type(env)
+ if assigned_type and assigned_type.is_pyobject:
+ # C type seems unsafe, e.g. due to 'None' default value => ignore annotation type
+ is_ambiguous = True
+ arg_type = None
+ # ignore 'int' and require 'cython.int' to avoid unsafe integer declarations
+ if arg_type in (PyrexTypes.c_long_type, PyrexTypes.c_int_type, PyrexTypes.c_float_type):
+ arg_type = PyrexTypes.c_double_type if annotation.name == 'float' else py_object_type
+ elif arg_type is not None and annotation.is_string_literal:
+ warning(annotation.pos,
+ "Strings should no longer be used for type declarations. Use 'cython.int' etc. directly.")
if arg_type is not None:
if explicit_pytype and not explicit_ctype and not arg_type.is_pyobject:
warning(annotation.pos,
"Python type declaration in signature annotation does not refer to a Python type")
base_type = CAnalysedBaseTypeNode(
annotation.pos, type=arg_type, is_arg=True)
+ elif is_ambiguous:
+ warning(annotation.pos, "Ambiguous types in annotation, ignoring")
else:
- warning(annotation.pos, "Unknown type declaration found in signature annotation")
+ warning(annotation.pos, "Unknown type declaration in annotation, ignoring")
return base_type, arg_type
@@ -188,6 +207,9 @@
# can either contain a single node or a list of nodes. See Visitor.py.
child_attrs = None
+ # Subset of attributes that are evaluated in the outer scope (e.g. function default arguments).
+ outer_attrs = None
+
cf_state = None
# This may be an additional (or 'actual') type that will be checked when
@@ -203,6 +225,7 @@
gil_message = "Operation"
nogil_check = None
+ in_nogil_context = False # For use only during code generation.
def gil_error(self, env=None):
error(self.pos, "%s not allowed without gil" % self.gil_message)
@@ -451,19 +474,30 @@
class CDefExternNode(StatNode):
- # include_file string or None
- # body StatNode
+ # include_file string or None
+ # verbatim_include string or None
+ # body StatListNode
child_attrs = ["body"]
def analyse_declarations(self, env):
- if self.include_file:
- env.add_include_file(self.include_file)
old_cinclude_flag = env.in_cinclude
env.in_cinclude = 1
self.body.analyse_declarations(env)
env.in_cinclude = old_cinclude_flag
+ if self.include_file or self.verbatim_include:
+ # Determine whether include should be late
+ stats = self.body.stats
+ if not env.directives['preliminary_late_includes_cy28']:
+ late = False
+ elif not stats:
+ # Special case: empty 'cdef extern' blocks are early
+ late = False
+ else:
+ late = all(isinstance(node, CVarDefNode) for node in stats)
+ env.add_include_file(self.include_file, self.verbatim_include, late)
+
def analyse_expressions(self, env):
return self
@@ -505,7 +539,7 @@
default = None
- def analyse(self, base_type, env, nonempty=0):
+ def analyse(self, base_type, env, nonempty=0, visibility=None, in_pxd=False):
if nonempty and self.name == '':
# May have mistaken the name for the type.
if base_type.is_ptr or base_type.is_array or base_type.is_buffer:
@@ -531,11 +565,11 @@
def analyse_templates(self):
return self.base.analyse_templates()
- def analyse(self, base_type, env, nonempty=0):
+ def analyse(self, base_type, env, nonempty=0, visibility=None, in_pxd=False):
if base_type.is_pyobject:
error(self.pos, "Pointer base type cannot be a Python object")
ptr_type = PyrexTypes.c_ptr_type(base_type)
- return self.base.analyse(ptr_type, env, nonempty=nonempty)
+ return self.base.analyse(ptr_type, env, nonempty=nonempty, visibility=visibility, in_pxd=in_pxd)
class CReferenceDeclaratorNode(CDeclaratorNode):
@@ -546,11 +580,11 @@
def analyse_templates(self):
return self.base.analyse_templates()
- def analyse(self, base_type, env, nonempty=0):
+ def analyse(self, base_type, env, nonempty=0, visibility=None, in_pxd=False):
if base_type.is_pyobject:
error(self.pos, "Reference base type cannot be a Python object")
ref_type = PyrexTypes.c_ref_type(base_type)
- return self.base.analyse(ref_type, env, nonempty=nonempty)
+ return self.base.analyse(ref_type, env, nonempty=nonempty, visibility=visibility, in_pxd=in_pxd)
class CArrayDeclaratorNode(CDeclaratorNode):
@@ -559,7 +593,7 @@
child_attrs = ["base", "dimension"]
- def analyse(self, base_type, env, nonempty=0):
+ def analyse(self, base_type, env, nonempty=0, visibility=None, in_pxd=False):
if (base_type.is_cpp_class and base_type.is_template_type()) or base_type.is_cfunction:
from .ExprNodes import TupleNode
if isinstance(self.dimension, TupleNode):
@@ -573,7 +607,7 @@
base_type = error_type
else:
base_type = base_type.specialize_here(self.pos, values)
- return self.base.analyse(base_type, env, nonempty=nonempty)
+ return self.base.analyse(base_type, env, nonempty=nonempty, visibility=visibility, in_pxd=in_pxd)
if self.dimension:
self.dimension = self.dimension.analyse_const_expression(env)
if not self.dimension.type.is_int:
@@ -594,7 +628,7 @@
if base_type.is_cfunction:
error(self.pos, "Array element cannot be a function")
array_type = PyrexTypes.c_array_type(base_type, size)
- return self.base.analyse(array_type, env, nonempty=nonempty)
+ return self.base.analyse(array_type, env, nonempty=nonempty, visibility=visibility, in_pxd=in_pxd)
class CFuncDeclaratorNode(CDeclaratorNode):
@@ -637,7 +671,7 @@
else:
return None
- def analyse(self, return_type, env, nonempty=0, directive_locals=None):
+ def analyse(self, return_type, env, nonempty=0, directive_locals=None, visibility=None, in_pxd=False):
if directive_locals is None:
directive_locals = {}
if nonempty:
@@ -689,6 +723,16 @@
and self.exception_check != '+'):
error(self.pos, "Exception clause not allowed for function returning Python object")
else:
+ if self.exception_value is None and self.exception_check and self.exception_check != '+':
+ # Use an explicit exception return value to speed up exception checks.
+ # Even if it is not declared, we can use the default exception value of the return type,
+ # unless the function is some kind of external function that we do not control.
+ if return_type.exception_value is not None and (visibility != 'extern' and not in_pxd):
+ # Extension types are more difficult because the signature must match the base type signature.
+ if not env.is_c_class_scope:
+ from .ExprNodes import ConstNode
+ self.exception_value = ConstNode(
+ self.pos, value=return_type.exception_value, type=return_type)
if self.exception_value:
self.exception_value = self.exception_value.analyse_const_expression(env)
if self.exception_check == '+':
@@ -697,9 +741,11 @@
and not exc_val_type.is_pyobject
and not (exc_val_type.is_cfunction
and not exc_val_type.return_type.is_pyobject
- and not exc_val_type.args)):
+ and not exc_val_type.args)
+ and not (exc_val_type == PyrexTypes.c_char_type
+ and self.exception_value.value == '*')):
error(self.exception_value.pos,
- "Exception value must be a Python exception or cdef function with no arguments.")
+ "Exception value must be a Python exception or cdef function with no arguments or *.")
exc_val = self.exception_value
else:
self.exception_value = self.exception_value.coerce_to(
@@ -743,7 +789,7 @@
error(self.pos, "cannot have both '%s' and '%s' "
"calling conventions" % (current, callspec))
func_type.calling_convention = callspec
- return self.base.analyse(func_type, env)
+ return self.base.analyse(func_type, env, visibility=visibility, in_pxd=in_pxd)
def declare_optional_arg_struct(self, func_type, env, fused_cname=None):
"""
@@ -757,7 +803,7 @@
scope.declare_var(arg_count_member, PyrexTypes.c_int_type, self.pos)
for arg in func_type.args[len(func_type.args) - self.optional_arg_count:]:
- scope.declare_var(arg.name, arg.type, arg.pos, allow_pyobject=1)
+ scope.declare_var(arg.name, arg.type, arg.pos, allow_pyobject=True, allow_memoryview=True)
struct_cname = env.mangle(Naming.opt_arg_prefix, self.base.name)
@@ -783,12 +829,12 @@
child_attrs = ["base"]
- def analyse(self, base_type, env, nonempty=0):
+ def analyse(self, base_type, env, nonempty=0, visibility=None, in_pxd=False):
if base_type.is_pyobject:
error(self.pos,
"Const base type cannot be a Python object")
const = PyrexTypes.c_const_type(base_type)
- return self.base.analyse(const, env, nonempty=nonempty)
+ return self.base.analyse(const, env, nonempty=nonempty, visibility=visibility, in_pxd=in_pxd)
class CArgDeclNode(Node):
@@ -808,6 +854,7 @@
# is_dynamic boolean Non-literal arg stored inside CyFunction
child_attrs = ["base_type", "declarator", "default", "annotation"]
+ outer_attrs = ["default", "annotation"]
is_self_arg = 0
is_type_arg = 0
@@ -858,7 +905,8 @@
base_type = base_type.base_type
# inject type declaration from annotations
- if self.annotation and env.directives['annotation_typing'] and self.base_type.name is None:
+ # this is called without 'env' by AdjustDefByDirectives transform before declaration analysis
+ if self.annotation and env and env.directives['annotation_typing'] and self.base_type.name is None:
arg_type = self.inject_type_from_annotations(env)
if arg_type is not None:
base_type = arg_type
@@ -870,7 +918,7 @@
annotation = self.annotation
if not annotation:
return None
- base_type, arg_type = _analyse_signature_annotation(annotation, env)
+ base_type, arg_type = analyse_type_annotation(annotation, env, assigned_value=self.default)
if base_type is not None:
self.base_type = base_type
return arg_type
@@ -1107,7 +1155,7 @@
type = template_node.analyse_as_type(env)
if type is None:
error(template_node.pos, "unknown type in template argument")
- return error_type
+ type = error_type
template_types.append(type)
self.type = base_type.specialize_here(self.pos, template_types)
@@ -1304,9 +1352,11 @@
if create_extern_wrapper:
declarator.overridable = False
if isinstance(declarator, CFuncDeclaratorNode):
- name_declarator, type = declarator.analyse(base_type, env, directive_locals=self.directive_locals)
+ name_declarator, type = declarator.analyse(
+ base_type, env, directive_locals=self.directive_locals, visibility=visibility, in_pxd=self.in_pxd)
else:
- name_declarator, type = declarator.analyse(base_type, env)
+ name_declarator, type = declarator.analyse(
+ base_type, env, visibility=visibility, in_pxd=self.in_pxd)
if not type.is_complete():
if not (self.visibility == 'extern' and type.is_array or type.is_memoryviewslice):
error(declarator.pos, "Variable type '%s' is incomplete" % type)
@@ -1558,7 +1608,8 @@
def analyse_declarations(self, env):
base = self.base_type.analyse(env)
- name_declarator, type = self.declarator.analyse(base, env)
+ name_declarator, type = self.declarator.analyse(
+ base, env, visibility=self.visibility, in_pxd=self.in_pxd)
name = name_declarator.name
cname = name_declarator.cname
@@ -1630,10 +1681,18 @@
elif default_seen:
error(arg.pos, "Non-default argument following default argument")
+ def analyse_annotation(self, env, annotation):
+ # Annotations may contain not only valid Python expressions but also arbitrary type references.
+ if annotation is None:
+ return None
+ if not env.directives['annotation_typing'] or annotation.analyse_as_type(env) is None:
+ annotation = annotation.analyse_types(env)
+ return annotation
+
def analyse_annotations(self, env):
for arg in self.args:
if arg.annotation:
- arg.annotation = arg.annotation.analyse_types(env)
+ arg.annotation = self.analyse_annotation(env, arg.annotation)
def align_argument_type(self, env, arg):
# @cython.locals()
@@ -1774,9 +1833,6 @@
tempvardecl_code = code.insertion_point()
self.generate_keyword_list(code)
- # ----- Extern library function declarations
- lenv.generate_library_function_declarations(code)
-
# ----- GIL acquisition
acquire_gil = self.acquire_gil
@@ -1816,6 +1872,10 @@
code_object = self.code_object.calculate_result_code(code) if self.code_object else None
code.put_trace_frame_init(code_object)
+ # ----- Special check for getbuffer
+ if is_getbuffer_slot:
+ self.getbuffer_check(code)
+
# ----- set up refnanny
if use_refnanny:
tempvardecl_code.put_declare_refcount_context()
@@ -1889,7 +1949,7 @@
code.put_var_incref(entry)
# Note: defaults are always incref-ed. For def functions, we
- # we aquire arguments from object converstion, so we have
+ # we acquire arguments from object conversion, so we have
# new references. If we are a cdef function, we need to
# incref our arguments
elif is_cdef and entry.type.is_memoryviewslice and len(entry.cf_assignments) > 1:
@@ -1993,7 +2053,8 @@
if err_val is None and default_retval:
err_val = default_retval
if err_val is not None:
- code.putln("%s = %s;" % (Naming.retval_cname, err_val))
+ if err_val != Naming.retval_cname:
+ code.putln("%s = %s;" % (Naming.retval_cname, err_val))
elif not self.return_type.is_void:
code.putln("__Pyx_pretend_to_initialize(&%s);" % Naming.retval_cname)
@@ -2117,7 +2178,10 @@
error(arg.pos, "Invalid use of 'void'")
elif not arg.type.is_complete() and not (arg.type.is_array or arg.type.is_memoryviewslice):
error(arg.pos, "Argument type '%s' is incomplete" % arg.type)
- return env.declare_arg(arg.name, arg.type, arg.pos)
+ entry = env.declare_arg(arg.name, arg.type, arg.pos)
+ if arg.annotation:
+ entry.annotation = arg.annotation
+ return entry
def generate_arg_type_test(self, arg, code):
# Generate type test for one argument.
@@ -2163,31 +2227,59 @@
#
# Special code for the __getbuffer__ function
#
- def getbuffer_init(self, code):
- info = self.local_scope.arg_entries[1].cname
- # Python 3.0 betas have a bug in memoryview which makes it call
- # getbuffer with a NULL parameter. For now we work around this;
- # the following block should be removed when this bug is fixed.
- code.putln("if (%s != NULL) {" % info)
- code.putln("%s->obj = Py_None; __Pyx_INCREF(Py_None);" % info)
- code.put_giveref("%s->obj" % info) # Do not refnanny object within structs
+ def _get_py_buffer_info(self):
+ py_buffer = self.local_scope.arg_entries[1]
+ try:
+ # Check builtin definition of struct Py_buffer
+ obj_type = py_buffer.type.base_type.scope.entries['obj'].type
+ except (AttributeError, KeyError):
+ # User code redeclared struct Py_buffer
+ obj_type = None
+ return py_buffer, obj_type
+
+ # Old Python 3 used to support write-locks on buffer-like objects by
+ # calling PyObject_GetBuffer() with a view==NULL parameter. This obscure
+ # feature is obsolete, it was almost never used (only one instance in
+ # `Modules/posixmodule.c` in Python 3.1) and it is now officially removed
+ # (see bpo-14203). We add an extra check here to prevent legacy code
+ # from trying to use the feature and prevent segmentation faults.
+ def getbuffer_check(self, code):
+ py_buffer, _ = self._get_py_buffer_info()
+ view = py_buffer.cname
+ code.putln("if (%s == NULL) {" % view)
+ code.putln("PyErr_SetString(PyExc_BufferError, "
+ "\"PyObject_GetBuffer: view==NULL argument is obsolete\");")
+ code.putln("return -1;")
code.putln("}")
+ def getbuffer_init(self, code):
+ py_buffer, obj_type = self._get_py_buffer_info()
+ view = py_buffer.cname
+ if obj_type and obj_type.is_pyobject:
+ code.put_init_to_py_none("%s->obj" % view, obj_type)
+ code.put_giveref("%s->obj" % view) # Do not refnanny object within structs
+ else:
+ code.putln("%s->obj = NULL;" % view)
+
def getbuffer_error_cleanup(self, code):
- info = self.local_scope.arg_entries[1].cname
- code.putln("if (%s != NULL && %s->obj != NULL) {"
- % (info, info))
- code.put_gotref("%s->obj" % info)
- code.putln("__Pyx_DECREF(%s->obj); %s->obj = NULL;"
- % (info, info))
- code.putln("}")
+ py_buffer, obj_type = self._get_py_buffer_info()
+ view = py_buffer.cname
+ if obj_type and obj_type.is_pyobject:
+ code.putln("if (%s->obj != NULL) {" % view)
+ code.put_gotref("%s->obj" % view)
+ code.put_decref_clear("%s->obj" % view, obj_type)
+ code.putln("}")
+ else:
+ code.putln("Py_CLEAR(%s->obj);" % view)
def getbuffer_normal_cleanup(self, code):
- info = self.local_scope.arg_entries[1].cname
- code.putln("if (%s != NULL && %s->obj == Py_None) {" % (info, info))
- code.put_gotref("Py_None")
- code.putln("__Pyx_DECREF(Py_None); %s->obj = NULL;" % info)
- code.putln("}")
+ py_buffer, obj_type = self._get_py_buffer_info()
+ view = py_buffer.cname
+ if obj_type and obj_type.is_pyobject:
+ code.putln("if (%s->obj == Py_None) {" % view)
+ code.put_gotref("%s->obj" % view)
+ code.put_decref_clear("%s->obj" % view, obj_type)
+ code.putln("}")
def get_preprocessor_guard(self):
if not self.entry.is_special:
@@ -2250,7 +2342,7 @@
self.is_c_class_method = env.is_c_class_scope
if self.directive_locals is None:
self.directive_locals = {}
- self.directive_locals.update(env.directives['locals'])
+ self.directive_locals.update(env.directives.get('locals', {}))
if self.directive_returns is not None:
base_type = self.directive_returns.analyse_as_type(env)
if base_type is None:
@@ -2263,10 +2355,10 @@
if isinstance(self.declarator, CFuncDeclaratorNode):
name_declarator, type = self.declarator.analyse(
base_type, env, nonempty=2 * (self.body is not None),
- directive_locals=self.directive_locals)
+ directive_locals=self.directive_locals, visibility=self.visibility)
else:
name_declarator, type = self.declarator.analyse(
- base_type, env, nonempty=2 * (self.body is not None))
+ base_type, env, nonempty=2 * (self.body is not None), visibility=self.visibility)
if not type.is_cfunction:
error(self.pos, "Suite attached to non-function declaration")
# Remember the actual type according to the function header
@@ -2561,6 +2653,9 @@
self.generate_arg_none_check(arg, code)
def generate_execution_code(self, code):
+ if code.globalstate.directives['linetrace']:
+ code.mark_pos(self.pos)
+ code.putln("") # generate line tracing code
super(CFuncDefNode, self).generate_execution_code(code)
if self.py_func_stat:
self.py_func_stat.generate_execution_code(code)
@@ -2652,6 +2747,7 @@
# decorator_indirection IndirectionNode Used to remove __Pyx_Method_ClassMethod for fused functions
child_attrs = ["args", "star_arg", "starstar_arg", "body", "decorators", "return_type_annotation"]
+ outer_attrs = ["decorators", "return_type_annotation"]
is_staticmethod = False
is_classmethod = False
@@ -2692,26 +2788,30 @@
self.num_required_kw_args = rk
self.num_required_args = r
- def as_cfunction(self, cfunc=None, scope=None, overridable=True, returns=None, modifiers=None):
+ def as_cfunction(self, cfunc=None, scope=None, overridable=True, returns=None, except_val=None, modifiers=None,
+ nogil=False, with_gil=False):
if self.star_arg:
error(self.star_arg.pos, "cdef function cannot have star argument")
if self.starstar_arg:
error(self.starstar_arg.pos, "cdef function cannot have starstar argument")
+ exception_value, exception_check = except_val or (None, False)
+
if cfunc is None:
cfunc_args = []
for formal_arg in self.args:
name_declarator, type = formal_arg.analyse(scope, nonempty=1)
cfunc_args.append(PyrexTypes.CFuncTypeArg(name=name_declarator.name,
cname=None,
+ annotation=formal_arg.annotation,
type=py_object_type,
pos=formal_arg.pos))
cfunc_type = PyrexTypes.CFuncType(return_type=py_object_type,
args=cfunc_args,
has_varargs=False,
exception_value=None,
- exception_check=False,
- nogil=False,
- with_gil=False,
+ exception_check=exception_check,
+ nogil=nogil,
+ with_gil=with_gil,
is_overridable=overridable)
cfunc = CVarDefNode(self.pos, type=cfunc_type)
else:
@@ -2727,11 +2827,10 @@
if type is None or type is PyrexTypes.py_object_type:
formal_arg.type = type_arg.type
formal_arg.name_declarator = name_declarator
- from . import ExprNodes
- if cfunc_type.exception_value is None:
- exception_value = None
- else:
- exception_value = ExprNodes.ConstNode(
+
+ if exception_value is None and cfunc_type.exception_value is not None:
+ from .ExprNodes import ConstNode
+ exception_value = ConstNode(
self.pos, value=cfunc_type.exception_value, type=cfunc_type.return_type)
declarator = CFuncDeclaratorNode(self.pos,
base=CNameDeclaratorNode(self.pos, name=self.name, cname=None),
@@ -2796,7 +2895,7 @@
# if a signature annotation provides a more specific return object type, use it
if self.return_type is py_object_type and self.return_type_annotation:
if env.directives['annotation_typing'] and not self.entry.is_special:
- _, return_type = _analyse_signature_annotation(self.return_type_annotation, env)
+ _, return_type = analyse_type_annotation(self.return_type_annotation, env)
if return_type and return_type.is_pyobject:
self.return_type = return_type
@@ -2813,7 +2912,7 @@
self.py_wrapper.analyse_declarations(env)
def analyse_argument_types(self, env):
- self.directive_locals = env.directives['locals']
+ self.directive_locals = env.directives.get('locals', {})
allow_none_for_extension_args = env.directives['allow_none_for_extension_args']
f2s = env.fused_to_specific
@@ -3031,7 +3130,7 @@
self.analyse_default_values(env)
self.analyse_annotations(env)
if self.return_type_annotation:
- self.return_type_annotation = self.return_type_annotation.analyse_types(env)
+ self.return_type_annotation = self.analyse_annotation(env, self.return_type_annotation)
if not self.needs_assignment_synthesis(env) and self.decorators:
for decorator in self.decorators[::-1]:
@@ -3106,7 +3205,10 @@
arg_code_list.append(arg_decl_code(self.star_arg))
if self.starstar_arg:
arg_code_list.append(arg_decl_code(self.starstar_arg))
- arg_code = ', '.join(arg_code_list)
+ if arg_code_list:
+ arg_code = ', '.join(arg_code_list)
+ else:
+ arg_code = 'void' # No arguments
dc = self.return_type.declaration_code(self.entry.pyfunc_cname)
decls_code = code.globalstate['decls']
@@ -3344,8 +3446,8 @@
if docstr.is_unicode:
docstr = docstr.as_utf8_string()
- code.putln(
- 'static char %s[] = %s;' % (
+ if not (entry.is_special and entry.name in ('__getbuffer__', '__releasebuffer__')):
+ code.putln('static char %s[] = %s;' % (
entry.doc_cname,
docstr.as_c_string_literal()))
@@ -3542,10 +3644,6 @@
min_positional_args == max_positional_args
has_kw_only_args = bool(kw_only_args)
- if self.num_required_kw_args:
- code.globalstate.use_utility_code(
- UtilityCode.load_cached("RaiseKeywordRequired", "FunctionArguments.c"))
-
if self.starstar_arg or self.star_arg:
self.generate_stararg_init_code(max_positional_args, code)
@@ -3598,6 +3696,8 @@
if not arg.default:
pystring_cname = code.intern_identifier(arg.name)
# required keyword-only argument missing
+ code.globalstate.use_utility_code(
+ UtilityCode.load_cached("RaiseKeywordRequired", "FunctionArguments.c"))
code.put('__Pyx_RaiseKeywordRequired("%s", %s); ' % (
self.name,
pystring_cname))
@@ -3668,18 +3768,12 @@
entry = arg.entry
code.putln("%s = %s;" % (entry.cname, item))
else:
- func = arg.type.from_py_function
- if func:
+ if arg.type.from_py_function:
if arg.default:
# C-typed default arguments must be handled here
code.putln('if (%s) {' % item)
- rhs = "%s(%s)" % (func, item)
- if arg.type.is_enum:
- rhs = arg.type.cast_code(rhs)
- code.putln("%s = %s; %s" % (
- arg.entry.cname,
- rhs,
- code.error_goto_if(arg.type.error_condition(arg.entry.cname), arg.pos)))
+ code.putln(arg.type.from_py_call_code(
+ item, arg.entry.cname, arg.pos, code))
if arg.default:
code.putln('} else {')
code.putln("%s = %s;" % (
@@ -3780,11 +3874,11 @@
code.putln('switch (pos_args) {')
for i, arg in enumerate(all_args[:last_required_arg+1]):
if max_positional_args > 0 and i <= max_positional_args:
+ if i != 0:
+ code.putln('CYTHON_FALLTHROUGH;')
if self.star_arg and i == max_positional_args:
code.putln('default:')
else:
- if i != 0:
- code.putln('CYTHON_FALLTHROUGH;')
code.putln('case %2d:' % i)
pystring_cname = code.intern_identifier(arg.name)
if arg.default:
@@ -3793,12 +3887,12 @@
continue
code.putln('if (kw_args > 0) {')
# don't overwrite default argument
- code.putln('PyObject* value = PyDict_GetItem(%s, %s);' % (
+ code.putln('PyObject* value = __Pyx_PyDict_GetItemStr(%s, %s);' % (
Naming.kwds_cname, pystring_cname))
code.putln('if (value) { values[%d] = value; kw_args--; }' % i)
code.putln('}')
else:
- code.putln('if (likely((values[%d] = PyDict_GetItem(%s, %s)) != 0)) kw_args--;' % (
+ code.putln('if (likely((values[%d] = __Pyx_PyDict_GetItemStr(%s, %s)) != 0)) kw_args--;' % (
i, Naming.kwds_cname, pystring_cname))
if i < min_positional_args:
if i == 0:
@@ -3819,6 +3913,8 @@
code.putln('}')
elif arg.kw_only:
code.putln('else {')
+ code.globalstate.use_utility_code(
+ UtilityCode.load_cached("RaiseKeywordRequired", "FunctionArguments.c"))
code.put('__Pyx_RaiseKeywordRequired("%s", %s); ' % (
self.name, pystring_cname))
code.putln(code.error_goto(self.pos))
@@ -3882,7 +3978,7 @@
else:
code.putln('if (kw_args == 1) {')
code.putln('const Py_ssize_t index = %d;' % first_optional_arg)
- code.putln('PyObject* value = PyDict_GetItem(%s, *%s[index]);' % (
+ code.putln('PyObject* value = __Pyx_PyDict_GetItemStr(%s, *%s[index]);' % (
Naming.kwds_cname, Naming.pykwdlist_cname))
code.putln('if (value) { values[index] = value; kw_args--; }')
if len(optional_args) > 1:
@@ -3918,17 +4014,14 @@
def generate_arg_conversion_from_pyobject(self, arg, code):
new_type = arg.type
- func = new_type.from_py_function
# copied from CoerceFromPyTypeNode
- if func:
- lhs = arg.entry.cname
- rhs = "%s(%s)" % (func, arg.hdr_cname)
- if new_type.is_enum:
- rhs = PyrexTypes.typecast(new_type, PyrexTypes.c_long_type, rhs)
- code.putln("%s = %s; %s" % (
- lhs,
- rhs,
- code.error_goto_if(new_type.error_condition(arg.entry.cname), arg.pos)))
+ if new_type.from_py_function:
+ code.putln(new_type.from_py_call_code(
+ arg.hdr_cname,
+ arg.entry.cname,
+ arg.pos,
+ code,
+ ))
else:
error(arg.pos, "Cannot convert Python object argument to type '%s'" % new_type)
@@ -3969,6 +4062,9 @@
is_generator = True
is_coroutine = False
+ is_iterable_coroutine = False
+ is_asyncgen = False
+ gen_type_name = 'Generator'
needs_closure = True
child_attrs = DefNode.child_attrs + ["gbody"]
@@ -3991,9 +4087,10 @@
code.putln('{')
code.putln('__pyx_CoroutineObject *gen = __Pyx_%s_New('
- '(__pyx_coroutine_body_t) %s, (PyObject *) %s, %s, %s, %s); %s' % (
- 'Coroutine' if self.is_coroutine else 'Generator',
- body_cname, Naming.cur_scope_cname, name, qualname, module_name,
+ '(__pyx_coroutine_body_t) %s, %s, (PyObject *) %s, %s, %s, %s); %s' % (
+ self.gen_type_name,
+ body_cname, self.code_object.calculate_result_code(code) if self.code_object else 'NULL',
+ Naming.cur_scope_cname, name, qualname, module_name,
code.error_goto_if_null('gen', self.pos)))
code.put_decref(Naming.cur_scope_cname, py_object_type)
if self.requires_classobj:
@@ -4007,30 +4104,40 @@
code.putln('}')
def generate_function_definitions(self, env, code):
- env.use_utility_code(UtilityCode.load_cached(
- 'Coroutine' if self.is_coroutine else 'Generator', "Coroutine.c"))
-
+ env.use_utility_code(UtilityCode.load_cached(self.gen_type_name, "Coroutine.c"))
self.gbody.generate_function_header(code, proto=True)
super(GeneratorDefNode, self).generate_function_definitions(env, code)
self.gbody.generate_function_definitions(env, code)
class AsyncDefNode(GeneratorDefNode):
+ gen_type_name = 'Coroutine'
is_coroutine = True
+class IterableAsyncDefNode(AsyncDefNode):
+ gen_type_name = 'IterableCoroutine'
+ is_iterable_coroutine = True
+
+
+class AsyncGenNode(AsyncDefNode):
+ gen_type_name = 'AsyncGen'
+ is_asyncgen = True
+
+
class GeneratorBodyDefNode(DefNode):
# Main code body of a generator implemented as a DefNode.
#
is_generator_body = True
is_inlined = False
+ is_async_gen_body = False
inlined_comprehension_type = None # container type for inlined comprehensions
- def __init__(self, pos=None, name=None, body=None):
+ def __init__(self, pos=None, name=None, body=None, is_async_gen_body=False):
super(GeneratorBodyDefNode, self).__init__(
- pos=pos, body=body, name=name, doc=None,
- args=[], star_arg=None, starstar_arg=None)
+ pos=pos, body=body, name=name, is_async_gen_body=is_async_gen_body,
+ doc=None, args=[], star_arg=None, starstar_arg=None)
def declare_generator_body(self, env):
prefix = env.next_id(env.scope_prefix)
@@ -4047,9 +4154,10 @@
self.declare_generator_body(env)
def generate_function_header(self, code, proto=False):
- header = "static PyObject *%s(__pyx_CoroutineObject *%s, PyObject *%s)" % (
+ header = "static PyObject *%s(__pyx_CoroutineObject *%s, CYTHON_UNUSED PyThreadState *%s, PyObject *%s)" % (
self.entry.func_cname,
Naming.generator_cname,
+ Naming.local_tstate_cname,
Naming.sent_value_cname)
if proto:
code.putln('%s; /* proto */' % header)
@@ -4077,11 +4185,14 @@
code.putln("PyObject *%s = NULL;" % Naming.retval_cname)
tempvardecl_code = code.insertion_point()
code.put_declare_refcount_context()
- code.put_setup_refcount_context(self.entry.name)
+ code.put_setup_refcount_context(self.entry.name or self.entry.qualified_name)
profile = code.globalstate.directives['profile']
linetrace = code.globalstate.directives['linetrace']
if profile or linetrace:
tempvardecl_code.put_trace_declarations()
+ code.funcstate.can_trace = True
+ code_object = self.code_object.calculate_result_code(code) if self.code_object else None
+ code.put_trace_frame_init(code_object)
# ----- Resume switch point.
code.funcstate.init_closure_temps(lenv.scope_class.type.scope)
@@ -4112,7 +4223,7 @@
# ----- Function body
self.generate_function_body(env, code)
# ----- Closure initialization
- if lenv.scope_class.type.scope.entries:
+ if lenv.scope_class.type.scope.var_entries:
closure_init_code.putln('%s = %s;' % (
lenv.scope_class.type.declaration_code(Naming.cur_scope_cname),
lenv.scope_class.type.cast_code('%s->closure' %
@@ -4120,6 +4231,9 @@
# FIXME: this silences a potential "unused" warning => try to avoid unused closures in more cases
code.putln("CYTHON_MAYBE_UNUSED_VAR(%s);" % Naming.cur_scope_cname)
+ if profile or linetrace:
+ code.funcstate.can_trace = False
+
code.mark_pos(self.pos)
code.putln("")
code.putln("/* function exit code */")
@@ -4127,9 +4241,13 @@
# on normal generator termination, we do not take the exception propagation
# path: no traceback info is required and not creating it is much faster
if not self.is_inlined and not self.body.is_terminator:
- code.putln('PyErr_SetNone(PyExc_StopIteration);')
+ if self.is_async_gen_body:
+ code.globalstate.use_utility_code(
+ UtilityCode.load_cached("StopAsyncIteration", "Coroutine.c"))
+ code.putln('PyErr_SetNone(%s);' % (
+ '__Pyx_PyExc_StopAsyncIteration' if self.is_async_gen_body else 'PyExc_StopIteration'))
# ----- Error cleanup
- if code.error_label in code.labels_used:
+ if code.label_used(code.error_label):
if not self.body.is_terminator:
code.put_goto(code.return_label)
code.put_label(code.error_label)
@@ -4138,8 +4256,7 @@
if Future.generator_stop in env.global_scope().context.future_directives:
# PEP 479: turn accidental StopIteration exceptions into a RuntimeError
code.globalstate.use_utility_code(UtilityCode.load_cached("pep479", "Coroutine.c"))
- code.putln("if (unlikely(PyErr_ExceptionMatches(PyExc_StopIteration))) "
- "__Pyx_Generator_Replace_StopIteration();")
+ code.putln("__Pyx_Generator_Replace_StopIteration(%d);" % bool(self.is_async_gen_body))
for cname, type in code.funcstate.all_managed_temps():
code.put_xdecref(cname, type)
code.put_add_traceback(self.entry.qualified_name)
@@ -4150,6 +4267,10 @@
code.put_xgiveref(Naming.retval_cname)
else:
code.put_xdecref_clear(Naming.retval_cname, py_object_type)
+ # For Py3.7, clearing is already done below.
+ code.putln("#if !CYTHON_USE_EXC_INFO_STACK")
+ code.putln("__Pyx_Coroutine_ResetAndClearException(%s);" % Naming.generator_cname)
+ code.putln("#endif")
code.putln('%s->resume_label = -1;' % Naming.generator_cname)
# clean up as early as possible to help breaking any reference cycles
code.putln('__Pyx_Coroutine_clear((PyObject*)%s);' % Naming.generator_cname)
@@ -4186,7 +4307,7 @@
class OverrideCheckNode(StatNode):
# A Node for dispatching to the def method if it
- # is overriden.
+ # is overridden.
#
# py_func
#
@@ -4232,7 +4353,25 @@
if self.py_func.is_module_scope:
code.putln("else {")
else:
- code.putln("else if (unlikely(Py_TYPE(%s)->tp_dictoffset != 0)) {" % self_arg)
+ code.putln("else if (unlikely((Py_TYPE(%s)->tp_dictoffset != 0)"
+ " || (Py_TYPE(%s)->tp_flags & (Py_TPFLAGS_IS_ABSTRACT | Py_TPFLAGS_HEAPTYPE)))) {" % (
+ self_arg, self_arg))
+
+ code.putln("#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_PYTYPE_LOOKUP && CYTHON_USE_TYPE_SLOTS")
+ code.globalstate.use_utility_code(
+ UtilityCode.load_cached("PyDictVersioning", "ObjectHandling.c"))
+ # TODO: remove the object dict version check by 'inlining' the getattr implementation for methods.
+ # This would allow checking the dict versions around _PyType_Lookup() if it returns a descriptor,
+ # and would (tada!) make this check a pure type based thing instead of supporting only a single
+ # instance at a time.
+ code.putln("static PY_UINT64_T %s = __PYX_DICT_VERSION_INIT, %s = __PYX_DICT_VERSION_INIT;" % (
+ Naming.tp_dict_version_temp, Naming.obj_dict_version_temp))
+ code.putln("if (unlikely(!__Pyx_object_dict_version_matches(%s, %s, %s))) {" % (
+ self_arg, Naming.tp_dict_version_temp, Naming.obj_dict_version_temp))
+ code.putln("PY_UINT64_T %s = __Pyx_get_tp_dict_version(%s);" % (
+ Naming.type_dict_guard_temp, self_arg))
+ code.putln("#endif")
+
func_node_temp = code.funcstate.allocate_temp(py_object_type, manage_ref=True)
self.func_node.set_cname(func_node_temp)
# need to get attribute manually--scope would return cdef method
@@ -4242,14 +4381,41 @@
code.putln("%s = __Pyx_PyObject_GetAttrStr(%s, %s); %s" % (
func_node_temp, self_arg, interned_attr_cname, err))
code.put_gotref(func_node_temp)
+
is_builtin_function_or_method = "PyCFunction_Check(%s)" % func_node_temp
- is_overridden = "(PyCFunction_GET_FUNCTION(%s) != (PyCFunction)%s)" % (
+ is_overridden = "(PyCFunction_GET_FUNCTION(%s) != (PyCFunction)(void*)%s)" % (
func_node_temp, self.py_func.entry.func_cname)
code.putln("if (!%s || %s) {" % (is_builtin_function_or_method, is_overridden))
self.body.generate_execution_code(code)
code.putln("}")
+
+        # NOTE: it is not guaranteed that we catch the exact versions here that were used for the lookup,
+        # but it is very unlikely that the versions change during the lookup, and the type dict safeguard
+        # should increase the chance of detecting such a case.
+ code.putln("#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_PYTYPE_LOOKUP && CYTHON_USE_TYPE_SLOTS")
+ code.putln("%s = __Pyx_get_tp_dict_version(%s);" % (
+ Naming.tp_dict_version_temp, self_arg))
+ code.putln("%s = __Pyx_get_object_dict_version(%s);" % (
+ Naming.obj_dict_version_temp, self_arg))
+ # Safety check that the type dict didn't change during the lookup. Since CPython looks up the
+ # attribute (descriptor) first in the type dict and then in the instance dict or through the
+ # descriptor, the only really far-away lookup when we get here is one in the type dict. So we
+ # double check the type dict version before and afterwards to guard against later changes of
+ # the type dict during the lookup process.
+ code.putln("if (unlikely(%s != %s)) {" % (
+ Naming.type_dict_guard_temp, Naming.tp_dict_version_temp))
+ code.putln("%s = %s = __PYX_DICT_VERSION_INIT;" % (
+ Naming.tp_dict_version_temp, Naming.obj_dict_version_temp))
+ code.putln("}")
+ code.putln("#endif")
+
code.put_decref_clear(func_node_temp, PyrexTypes.py_object_type)
code.funcstate.release_temp(func_node_temp)
+
+ code.putln("#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_PYTYPE_LOOKUP && CYTHON_USE_TYPE_SLOTS")
+ code.putln("}")
+ code.putln("#endif")
+
code.putln("}")
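The `OverrideCheckNode` changes above widen the fast-path test and cache the lookup result under CPython dict versions. The property being guarded is method overriding: the slow dispatch path is only needed when the method found through the instance's type differs from the compiled default implementation. A simplified plain-Python analogue of that check (the function and class names here are illustrative, not Cython API):

```python
class Base:
    def meth(self):
        return "base"

class Sub(Base):
    def meth(self):
        return "override"

def call_maybe_overridden(obj):
    # Analogue of the override check: look the method up on the instance's
    # type and only take the generic dispatch path if it is not the known
    # base implementation.
    func = type(obj).meth
    if func is Base.meth:
        return "fast path: " + obj.meth()
    return "slow path: " + obj.meth()

r1 = call_maybe_overridden(Base())
r2 = call_maybe_overridden(Sub())
```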
@@ -4362,36 +4528,13 @@
if self.is_py3_style_class:
error(self.classobj.pos, "Python3 style class could not be represented as C class")
return
- bases = self.classobj.bases.args
- if len(bases) == 0:
- base_class_name = None
- base_class_module = None
- elif len(bases) == 1:
- base = bases[0]
- path = []
- from .ExprNodes import AttributeNode, NameNode
- while isinstance(base, AttributeNode):
- path.insert(0, base.attribute)
- base = base.obj
- if isinstance(base, NameNode):
- path.insert(0, base.name)
- base_class_name = path[-1]
- if len(path) > 1:
- base_class_module = u'.'.join(path[:-1])
- else:
- base_class_module = None
- else:
- error(self.classobj.bases.args.pos, "Invalid base class")
- else:
- error(self.classobj.bases.args.pos, "C class may only have one base class")
- return None
+ from . import ExprNodes
return CClassDefNode(self.pos,
visibility='private',
module_name=None,
class_name=self.name,
- base_class_module=base_class_module,
- base_class_name=base_class_name,
+ bases=self.classobj.bases or ExprNodes.TupleNode(self.pos, args=[]),
decorators=self.decorators,
body=self.body,
in_pxd=False,
@@ -4474,6 +4617,7 @@
self.bases.free_temps(code)
code.pyclass_stack.pop()
+
class CClassDefNode(ClassDefNode):
# An extension type definition.
#
@@ -4483,10 +4627,10 @@
# module_name string or None For import of extern type objects
# class_name string Unqualified name of class
# as_name string or None Name to declare as in this scope
- # base_class_module string or None Module containing the base class
- # base_class_name string or None Name of the base class
+ # bases TupleNode Base class(es)
# objstruct_name string or None Specified C name of object struct
# typeobj_name string or None Specified C name of type object
+ # check_size 'warn', 'error', 'ignore' What to do if tp_basicsize does not match
# in_pxd boolean Is in a .pxd file
# decorators [DecoratorNode] list of decorators or None
# doc string or None
@@ -4503,6 +4647,7 @@
api = False
objstruct_name = None
typeobj_name = None
+ check_size = None
decorators = None
shadow = False
@@ -4538,6 +4683,7 @@
typeobj_cname=self.typeobj_name,
visibility=self.visibility,
typedef_flag=self.typedef_flag,
+            check_size=self.check_size,
api=self.api,
buffer_defaults=self.buffer_defaults(env),
shadow=self.shadow)
@@ -4564,44 +4710,34 @@
self.module.has_extern_class = 1
env.add_imported_module(self.module)
- if self.base_class_name:
- if self.base_class_module:
- base_class_scope = env.find_imported_module(self.base_class_module.split('.'), self.pos)
- if not base_class_scope:
- error(self.pos, "'%s' is not a cimported module" % self.base_class_module)
- return
- else:
- base_class_scope = env
- if self.base_class_name == 'object':
- # extension classes are special and don't need to inherit from object
- if base_class_scope is None or base_class_scope.lookup('object') is None:
- self.base_class_name = None
- self.base_class_module = None
- base_class_scope = None
- if base_class_scope:
- base_class_entry = base_class_scope.find(self.base_class_name, self.pos)
- if base_class_entry:
- if not base_class_entry.is_type:
- error(self.pos, "'%s' is not a type name" % self.base_class_name)
- elif not base_class_entry.type.is_extension_type and \
- not (base_class_entry.type.is_builtin_type and
- base_class_entry.type.objstruct_cname):
- error(self.pos, "'%s' is not an extension type" % self.base_class_name)
- elif not base_class_entry.type.is_complete():
- error(self.pos, "Base class '%s' of type '%s' is incomplete" % (
- self.base_class_name, self.class_name))
- elif base_class_entry.type.scope and base_class_entry.type.scope.directives and \
- base_class_entry.type.is_final_type:
- error(self.pos, "Base class '%s' of type '%s' is final" % (
- self.base_class_name, self.class_name))
- elif base_class_entry.type.is_builtin_type and \
- base_class_entry.type.name in ('tuple', 'str', 'bytes'):
- error(self.pos, "inheritance from PyVarObject types like '%s' is not currently supported"
- % base_class_entry.type.name)
- else:
- self.base_type = base_class_entry.type
- if env.directives.get('freelist', 0) > 0:
- warning(self.pos, "freelists cannot be used on subtypes, only the base class can manage them", 1)
+ if self.bases.args:
+ base = self.bases.args[0]
+ base_type = base.analyse_as_type(env)
+ if base_type in (PyrexTypes.c_int_type, PyrexTypes.c_long_type, PyrexTypes.c_float_type):
+ # Use the Python rather than C variant of these types.
+ base_type = env.lookup(base_type.sign_and_name()).type
+ if base_type is None:
+ error(base.pos, "First base of '%s' is not an extension type" % self.class_name)
+ elif base_type == PyrexTypes.py_object_type:
+ base_class_scope = None
+ elif not base_type.is_extension_type and \
+ not (base_type.is_builtin_type and base_type.objstruct_cname):
+ error(base.pos, "'%s' is not an extension type" % base_type)
+ elif not base_type.is_complete():
+ error(base.pos, "Base class '%s' of type '%s' is incomplete" % (
+ base_type.name, self.class_name))
+ elif base_type.scope and base_type.scope.directives and \
+ base_type.is_final_type:
+ error(base.pos, "Base class '%s' of type '%s' is final" % (
+ base_type, self.class_name))
+ elif base_type.is_builtin_type and \
+ base_type.name in ('tuple', 'str', 'bytes'):
+ error(base.pos, "inheritance from PyVarObject types like '%s' is not currently supported"
+ % base_type.name)
+ else:
+ self.base_type = base_type
+ if env.directives.get('freelist', 0) > 0 and base_type != PyrexTypes.py_object_type:
+ warning(self.pos, "freelists cannot be used on subtypes, only the base class can manage them", 1)
has_body = self.body is not None
if has_body and self.base_type and not self.base_type.scope:
@@ -4633,6 +4769,7 @@
base_type=self.base_type,
objstruct_cname=self.objstruct_name,
typeobj_cname=self.typeobj_name,
+ check_size=self.check_size,
visibility=self.visibility,
typedef_flag=self.typedef_flag,
api=self.api,
@@ -4661,6 +4798,28 @@
else:
scope.implemented = 1
+ if len(self.bases.args) > 1:
+ if not has_body or self.in_pxd:
+ error(self.bases.args[1].pos, "Only declare first base in declaration.")
+ # At runtime, we check that the other bases are heap types
+ # and that a __dict__ is added if required.
+ for other_base in self.bases.args[1:]:
+ if other_base.analyse_as_type(env):
+ error(other_base.pos, "Only one extension type base class allowed.")
+ self.entry.type.early_init = 0
+ from . import ExprNodes
+ self.type_init_args = ExprNodes.TupleNode(
+ self.pos,
+ args=[ExprNodes.IdentifierStringNode(self.pos, value=self.class_name),
+ self.bases,
+ ExprNodes.DictNode(self.pos, key_value_pairs=[])])
+ elif self.base_type:
+ self.entry.type.early_init = self.base_type.is_external or self.base_type.early_init
+ self.type_init_args = None
+ else:
+ self.entry.type.early_init = 1
+ self.type_init_args = None
+
env.allocate_vtable_names(self.entry)
for thunk in self.entry.type.defered_declarations:
@@ -4670,6 +4829,8 @@
if self.body:
scope = self.entry.type.scope
self.body = self.body.analyse_expressions(scope)
+ if self.type_init_args:
+ self.type_init_args.analyse_expressions(env)
return self
def generate_function_definitions(self, env, code):
@@ -4683,8 +4844,175 @@
code.mark_pos(self.pos)
if self.body:
self.body.generate_execution_code(code)
+ if not self.entry.type.early_init:
+ if self.type_init_args:
+ self.type_init_args.generate_evaluation_code(code)
+ bases = "PyTuple_GET_ITEM(%s, 1)" % self.type_init_args.result()
+ first_base = "((PyTypeObject*)PyTuple_GET_ITEM(%s, 0))" % bases
+ # Let Python do the base types compatibility checking.
+ trial_type = code.funcstate.allocate_temp(PyrexTypes.py_object_type, True)
+ code.putln("%s = PyType_Type.tp_new(&PyType_Type, %s, NULL);" % (
+ trial_type, self.type_init_args.result()))
+ code.putln(code.error_goto_if_null(trial_type, self.pos))
+ code.put_gotref(trial_type)
+ code.putln("if (((PyTypeObject*) %s)->tp_base != %s) {" % (
+ trial_type, first_base))
+ code.putln("PyErr_Format(PyExc_TypeError, \"best base '%s' must be equal to first base '%s'\",")
+ code.putln(" ((PyTypeObject*) %s)->tp_base->tp_name, %s->tp_name);" % (
+ trial_type, first_base))
+ code.putln(code.error_goto(self.pos))
+ code.putln("}")
+ code.funcstate.release_temp(trial_type)
+ code.put_incref(bases, PyrexTypes.py_object_type)
+ code.put_giveref(bases)
+ code.putln("%s.tp_bases = %s;" % (self.entry.type.typeobj_cname, bases))
+ code.put_decref_clear(trial_type, PyrexTypes.py_object_type)
+ self.type_init_args.generate_disposal_code(code)
+ self.type_init_args.free_temps(code)
+
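The trial-type code above lets `PyType_Type.tp_new` compute CPython's "best base" and then requires it to equal the first declared base. The same rule is observable from Python, where `__base__` exposes the computed best base:

```python
class A: pass
class B: pass

# With two plain bases, the best (solid) base is the first one listed:
T = type("T", (A, B), {})
first_base_matches = T.__base__ is A

# With (A, int), the best base is int, because int contributes its own C
# layout -- so the best base is *not* the first declared base.  This is the
# situation the generated code rejects with a TypeError.
U = type("U", (A, int), {})
first_base_is_int = U.__base__ is int
```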
+ self.generate_type_ready_code(self.entry, code, True)
+
+ # Also called from ModuleNode for early init types.
+ @staticmethod
+ def generate_type_ready_code(entry, code, heap_type_bases=False):
+ # Generate a call to PyType_Ready for an extension
+ # type defined in this module.
+ type = entry.type
+ typeobj_cname = type.typeobj_cname
+ scope = type.scope
+ if not scope: # could be None if there was an error
+ return
+ if entry.visibility != 'extern':
+ for slot in TypeSlots.slot_table:
+ slot.generate_dynamic_init_code(scope, code)
+ if heap_type_bases:
+ code.globalstate.use_utility_code(
+ UtilityCode.load_cached('PyType_Ready', 'ExtensionTypes.c'))
+ readyfunc = "__Pyx_PyType_Ready"
+ else:
+ readyfunc = "PyType_Ready"
+ code.putln(
+ "if (%s(&%s) < 0) %s" % (
+ readyfunc,
+ typeobj_cname,
+ code.error_goto(entry.pos)))
+ # Don't inherit tp_print from builtin types, restoring the
+ # behavior of using tp_repr or tp_str instead.
+ # ("tp_print" was renamed to "tp_vectorcall_offset" in Py3.8b1)
+ code.putln("#if PY_VERSION_HEX < 0x030800B1")
+ code.putln("%s.tp_print = 0;" % typeobj_cname)
+ code.putln("#endif")
+
+ # Use specialised attribute lookup for types with generic lookup but no instance dict.
+ getattr_slot_func = TypeSlots.get_slot_code_by_name(scope, 'tp_getattro')
+ dictoffset_slot_func = TypeSlots.get_slot_code_by_name(scope, 'tp_dictoffset')
+ if getattr_slot_func == '0' and dictoffset_slot_func == '0':
+ if type.is_final_type:
+ py_cfunc = "__Pyx_PyObject_GenericGetAttrNoDict" # grepable
+ utility_func = "PyObject_GenericGetAttrNoDict"
+ else:
+ py_cfunc = "__Pyx_PyObject_GenericGetAttr"
+ utility_func = "PyObject_GenericGetAttr"
+ code.globalstate.use_utility_code(UtilityCode.load_cached(utility_func, "ObjectHandling.c"))
+
+ code.putln("if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) &&"
+ " likely(!%s.tp_dictoffset && %s.tp_getattro == PyObject_GenericGetAttr)) {" % (
+ typeobj_cname, typeobj_cname))
+ code.putln("%s.tp_getattro = %s;" % (
+ typeobj_cname, py_cfunc))
+ code.putln("}")
+
+ # Fix special method docstrings. This is a bit of a hack, but
+ # unless we let PyType_Ready create the slot wrappers we have
+ # a significant performance hit. (See trac #561.)
+ for func in entry.type.scope.pyfunc_entries:
+ is_buffer = func.name in ('__getbuffer__', '__releasebuffer__')
+ if (func.is_special and Options.docstrings and
+ func.wrapperbase_cname and not is_buffer):
+ slot = TypeSlots.method_name_to_slot.get(func.name)
+ preprocessor_guard = slot.preprocessor_guard_code() if slot else None
+ if preprocessor_guard:
+ code.putln(preprocessor_guard)
+ code.putln('#if CYTHON_COMPILING_IN_CPYTHON')
+ code.putln("{")
+ code.putln(
+ 'PyObject *wrapper = PyObject_GetAttrString((PyObject *)&%s, "%s"); %s' % (
+ typeobj_cname,
+ func.name,
+ code.error_goto_if_null('wrapper', entry.pos)))
+ code.putln(
+ "if (Py_TYPE(wrapper) == &PyWrapperDescr_Type) {")
+ code.putln(
+ "%s = *((PyWrapperDescrObject *)wrapper)->d_base;" % (
+ func.wrapperbase_cname))
+ code.putln(
+ "%s.doc = %s;" % (func.wrapperbase_cname, func.doc_cname))
+ code.putln(
+ "((PyWrapperDescrObject *)wrapper)->d_base = &%s;" % (
+ func.wrapperbase_cname))
+ code.putln("}")
+ code.putln("}")
+ code.putln('#endif')
+ if preprocessor_guard:
+ code.putln('#endif')
+ if type.vtable_cname:
+ code.globalstate.use_utility_code(
+ UtilityCode.load_cached('SetVTable', 'ImportExport.c'))
+ code.putln(
+ "if (__Pyx_SetVtable(%s.tp_dict, %s) < 0) %s" % (
+ typeobj_cname,
+ type.vtabptr_cname,
+ code.error_goto(entry.pos)))
+ if heap_type_bases:
+ code.globalstate.use_utility_code(
+ UtilityCode.load_cached('MergeVTables', 'ImportExport.c'))
+ code.putln("if (__Pyx_MergeVtables(&%s) < 0) %s" % (
+ typeobj_cname,
+ code.error_goto(entry.pos)))
+ if not type.scope.is_internal and not type.scope.directives.get('internal'):
+ # scope.is_internal is set for types defined by
+ # Cython (such as closures), the 'internal'
+ # directive is set by users
+ code.putln(
+ 'if (PyObject_SetAttr(%s, %s, (PyObject *)&%s) < 0) %s' % (
+ Naming.module_cname,
+ code.intern_identifier(scope.class_name),
+ typeobj_cname,
+ code.error_goto(entry.pos)))
+ weakref_entry = scope.lookup_here("__weakref__") if not scope.is_closure_class_scope else None
+ if weakref_entry:
+ if weakref_entry.type is py_object_type:
+ tp_weaklistoffset = "%s.tp_weaklistoffset" % typeobj_cname
+ if type.typedef_flag:
+ objstruct = type.objstruct_cname
+ else:
+ objstruct = "struct %s" % type.objstruct_cname
+ code.putln("if (%s == 0) %s = offsetof(%s, %s);" % (
+ tp_weaklistoffset,
+ tp_weaklistoffset,
+ objstruct,
+ weakref_entry.cname))
+ else:
+ error(weakref_entry.pos, "__weakref__ slot must be of type 'object'")
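The block above fills in `tp_weaklistoffset` from a declared `__weakref__` attribute. The pure-Python counterpart is the `__weakref__` entry in `__slots__`: without it, instances of a slotted class cannot be weakly referenced:

```python
import weakref

class NoWeakref:
    __slots__ = ("x",)                 # no __weakref__ slot, no instance dict

class WithWeakref:
    __slots__ = ("x", "__weakref__")   # reserves the weak reference slot

weakrefable = weakref.ref(WithWeakref()) is not None
try:
    weakref.ref(NoWeakref())
    rejected = False
except TypeError:
    rejected = True
```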
+ if scope.lookup_here("__reduce_cython__") if not scope.is_closure_class_scope else None:
+ # Unfortunately, we cannot reliably detect whether a
+ # superclass defined __reduce__ at compile time, so we must
+ # do so at runtime.
+ code.globalstate.use_utility_code(
+ UtilityCode.load_cached('SetupReduce', 'ExtensionTypes.c'))
+ code.putln('if (__Pyx_setup_reduce((PyObject*)&%s) < 0) %s' % (
+ typeobj_cname,
+ code.error_goto(entry.pos)))
+ # Generate code to initialise the typeptr of an extension
+ # type defined in this module to point to its type object.
+ if type.typeobj_cname:
+ code.putln(
+ "%s = &%s;" % (
+ type.typeptr_cname, type.typeobj_cname))
def annotate(self, code):
+ if self.type_init_args:
+ self.type_init_args.annotate(code)
if self.body:
self.body.annotate(code)
@@ -4763,12 +5091,13 @@
def analyse_declarations(self, env):
from . import ExprNodes
- if isinstance(self.expr, ExprNodes.GeneralCallNode):
- func = self.expr.function.as_cython_attribute()
+ expr = self.expr
+ if isinstance(expr, ExprNodes.GeneralCallNode):
+ func = expr.function.as_cython_attribute()
if func == u'declare':
- args, kwds = self.expr.explicit_args_kwds()
+ args, kwds = expr.explicit_args_kwds()
if len(args):
- error(self.expr.pos, "Variable names must be specified.")
+ error(expr.pos, "Variable names must be specified.")
for var, type_node in kwds.key_value_pairs:
type = type_node.analyse_as_type(env)
if type is None:
@@ -4776,10 +5105,20 @@
else:
env.declare_var(var.value, type, var.pos, is_cdef=True)
self.__class__ = PassStatNode
+ elif getattr(expr, 'annotation', None) is not None:
+ if expr.is_name:
+ # non-code variable annotation, e.g. "name: type"
+ expr.declare_from_annotation(env)
+ self.__class__ = PassStatNode
+ elif expr.is_attribute or expr.is_subscript:
+ # unused expression with annotation, e.g. "a[0]: type" or "a.xyz : type"
+ self.__class__ = PassStatNode
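The new branch above turns a bare annotation such as `name: type` into a declaration and replaces the statement with a no-op (`PassStatNode`). This mirrors Python semantics, where an annotation without a value records the type but binds nothing:

```python
ns = {}
exec("x: int\ny: float = 2.0", ns)

# 'x' is only annotated, never bound; 'y' is both annotated and assigned.
annotations = ns["__annotations__"]
x_is_bound = "x" in ns
```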
def analyse_expressions(self, env):
self.expr.result_is_used = False # hint that .result() may safely be left empty
self.expr = self.expr.analyse_expressions(env)
+ # Repeat in case of node replacement.
+ self.expr.result_is_used = False # hint that .result() may safely be left empty
return self
def nogil_check(self, env):
@@ -4790,9 +5129,13 @@
def generate_execution_code(self, code):
code.mark_pos(self.pos)
+ self.expr.result_is_used = False # hint that .result() may safely be left empty
self.expr.generate_evaluation_code(code)
if not self.expr.is_temp and self.expr.result():
- code.putln("%s;" % self.expr.result())
+ result = self.expr.result()
+ if not self.expr.type.is_void:
+ result = "(void)(%s)" % result
+ code.putln("%s;" % result)
self.expr.generate_disposal_code(code)
self.expr.free_temps(code)
@@ -5576,10 +5919,12 @@
# value ExprNode or None
# return_type PyrexType
# in_generator return inside of generator => raise StopIteration
+ # in_async_gen return inside of async generator
child_attrs = ["value"]
is_terminator = True
in_generator = False
+ in_async_gen = False
# Whether we are in a parallel section
in_parallel = False
@@ -5591,6 +5936,8 @@
error(self.pos, "Return not inside a function body")
return self
if self.value:
+ if self.in_async_gen:
+ error(self.pos, "Return with value in async generator")
self.value = self.value.analyse_types(env)
if return_type.is_void or return_type.is_returncode:
error(self.value.pos, "Return with value in void function")
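The new `in_async_gen` check above reports "Return with value in async generator", matching CPython, where the same construct is rejected at compile time:

```python
src = """
async def agen():
    yield 1
    return 42
"""
try:
    compile(src, "<test>", "exec")
    is_syntax_error = False
except SyntaxError:
    # CPython: "'return' with value in async generator"
    is_syntax_error = True
```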
@@ -5614,19 +5961,23 @@
if not self.return_type:
# error reported earlier
return
+
+ value = self.value
if self.return_type.is_pyobject:
- code.put_xdecref(Naming.retval_cname,
- self.return_type)
+ code.put_xdecref(Naming.retval_cname, self.return_type)
+ if value and value.is_none:
+ # Use specialised default handling for "return None".
+ value = None
- if self.value:
- self.value.generate_evaluation_code(code)
+ if value:
+ value.generate_evaluation_code(code)
if self.return_type.is_memoryviewslice:
from . import MemoryView
MemoryView.put_acquire_memoryviewslice(
lhs_cname=Naming.retval_cname,
lhs_type=self.return_type,
- lhs_pos=self.value.pos,
- rhs=self.value,
+ lhs_pos=value.pos,
+ rhs=value,
code=code,
have_gil=self.in_nogil_context)
elif self.in_generator:
@@ -5635,18 +5986,22 @@
UtilityCode.load_cached("ReturnWithStopIteration", "Coroutine.c"))
code.putln("%s = NULL; __Pyx_ReturnWithStopIteration(%s);" % (
Naming.retval_cname,
- self.value.py_result()))
- self.value.generate_disposal_code(code)
+ value.py_result()))
+ value.generate_disposal_code(code)
else:
- self.value.make_owned_reference(code)
+ value.make_owned_reference(code)
code.putln("%s = %s;" % (
Naming.retval_cname,
- self.value.result_as(self.return_type)))
- self.value.generate_post_assignment_code(code)
- self.value.free_temps(code)
+ value.result_as(self.return_type)))
+ value.generate_post_assignment_code(code)
+ value.free_temps(code)
else:
if self.return_type.is_pyobject:
if self.in_generator:
+ if self.in_async_gen:
+ code.globalstate.use_utility_code(
+ UtilityCode.load_cached("StopAsyncIteration", "Coroutine.c"))
+ code.put("PyErr_SetNone(__Pyx_PyExc_StopAsyncIteration); ")
code.putln("%s = NULL;" % Naming.retval_cname)
else:
code.put_init_to_py_none(Naming.retval_cname, self.return_type)
@@ -5891,9 +6246,13 @@
code.mark_pos(self.pos)
end_label = code.new_label()
last = len(self.if_clauses)
- if not self.else_clause:
+ if self.else_clause:
+ # If the 'else' clause is 'unlikely', then set the preceding 'if' clause to 'likely' to reflect that.
+ self._set_branch_hint(self.if_clauses[-1], self.else_clause, inverse=True)
+ else:
last -= 1 # avoid redundant goto at end of last if-clause
for i, if_clause in enumerate(self.if_clauses):
+ self._set_branch_hint(if_clause, if_clause.body)
if_clause.generate_execution_code(code, end_label, is_last=i == last)
if self.else_clause:
code.mark_pos(self.else_clause.pos)
@@ -5902,6 +6261,21 @@
code.putln("}")
code.put_label(end_label)
+ def _set_branch_hint(self, clause, statements_node, inverse=False):
+ if not statements_node.is_terminator:
+ return
+ if not isinstance(statements_node, StatListNode) or not statements_node.stats:
+ return
+ # Anything that unconditionally raises exceptions should be considered unlikely.
+ if isinstance(statements_node.stats[-1], (RaiseStatNode, ReraiseStatNode)):
+ if len(statements_node.stats) > 1:
+ # Allow simple statements before the 'raise', but no conditions, loops, etc.
+ non_branch_nodes = (ExprStatNode, AssignmentNode, DelStatNode, GlobalNode, NonlocalNode)
+ for node in statements_node.stats[:-1]:
+ if not isinstance(node, non_branch_nodes):
+ return
+ clause.branch_hint = 'likely' if inverse else 'unlikely'
+
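The `_set_branch_hint` heuristic above can be sketched in isolation. The following is a hypothetical, simplified stand-in (the tuple-based statement encoding is invented for illustration; the real method walks `StatListNode.stats`): a branch whose body unconditionally ends in `raise`, preceded only by simple statements, is marked as the unlikely path:

```python
# Hypothetical sketch of the branch-hint rule, not Cython's actual node API.
def branch_hint(body_statements, inverse=False):
    if not body_statements or body_statements[-1][0] != "raise":
        return None
    # Allow only simple statements before the raise -- no conditions, loops, etc.
    if any(kind not in ("assign", "expr") for kind, _ in body_statements[:-1]):
        return None
    return "likely" if inverse else "unlikely"

hint = branch_hint([("assign", "msg = 'bad'"), ("raise", "ValueError(msg)")])
no_hint = branch_hint([("expr", "do_work()")])
```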
def generate_function_definitions(self, env, code):
for clause in self.if_clauses:
clause.generate_function_definitions(env, code)
@@ -5922,6 +6296,7 @@
# body StatNode
child_attrs = ["condition", "body"]
+ branch_hint = None
def analyse_declarations(self, env):
self.body.analyse_declarations(env)
@@ -5934,7 +6309,10 @@
def generate_execution_code(self, code, end_label, is_last):
self.condition.generate_evaluation_code(code)
code.mark_pos(self.pos)
- code.putln("if (%s) {" % self.condition.result())
+ condition = self.condition.result()
+ if self.branch_hint:
+ condition = '%s(%s)' % (self.branch_hint, condition)
+ code.putln("if (%s) {" % condition)
self.condition.generate_disposal_code(code)
self.condition.free_temps(code)
self.body.generate_execution_code(code)
@@ -5960,11 +6338,19 @@
child_attrs = ['conditions', 'body']
- def generate_execution_code(self, code):
+ def generate_condition_evaluation_code(self, code):
for cond in self.conditions:
- code.mark_pos(cond.pos)
cond.generate_evaluation_code(code)
+
+ def generate_execution_code(self, code):
+ num_conditions = len(self.conditions)
+ line_tracing_enabled = code.globalstate.directives['linetrace']
+ for i, cond in enumerate(self.conditions, 1):
code.putln("case %s:" % cond.result())
+ code.mark_pos(cond.pos) # Tracing code must appear *after* the 'case' statement.
+ if line_tracing_enabled and i < num_conditions:
+ # Allow fall-through after the line tracing code.
+ code.putln('CYTHON_FALLTHROUGH;')
self.body.generate_execution_code(code)
code.mark_pos(self.pos, trace=False)
code.putln("break;")
@@ -5991,6 +6377,10 @@
def generate_execution_code(self, code):
self.test.generate_evaluation_code(code)
+ # Make sure all conditions are evaluated before going into the switch() statement.
+ # This is required in order to prevent any execution code from leaking into the space between the cases.
+ for case in self.cases:
+ case.generate_condition_evaluation_code(code)
code.mark_pos(self.pos)
code.putln("switch (%s) {" % self.test.result())
for case in self.cases:
@@ -6177,6 +6567,66 @@
var.release(code)
+class SetIterationNextNode(Node):
+ # Helper node for calling _PySet_NextEntry() inside of a WhileStatNode
+ # and checking the set size for changes. Created in Optimize.py.
+ child_attrs = ['set_obj', 'expected_size', 'pos_index_var',
+ 'coerced_value_var', 'value_target', 'is_set_flag']
+
+ coerced_value_var = value_ref = None
+
+ def __init__(self, set_obj, expected_size, pos_index_var, value_target, is_set_flag):
+ Node.__init__(
+ self, set_obj.pos,
+ set_obj=set_obj,
+ expected_size=expected_size,
+ pos_index_var=pos_index_var,
+ value_target=value_target,
+ is_set_flag=is_set_flag,
+ is_temp=True,
+ type=PyrexTypes.c_bint_type)
+
+ def analyse_expressions(self, env):
+ from . import ExprNodes
+ self.set_obj = self.set_obj.analyse_types(env)
+ self.expected_size = self.expected_size.analyse_types(env)
+ self.pos_index_var = self.pos_index_var.analyse_types(env)
+ self.value_target = self.value_target.analyse_target_types(env)
+ self.value_ref = ExprNodes.TempNode(self.value_target.pos, type=PyrexTypes.py_object_type)
+ self.coerced_value_var = self.value_ref.coerce_to(self.value_target.type, env)
+ self.is_set_flag = self.is_set_flag.analyse_types(env)
+ return self
+
+ def generate_function_definitions(self, env, code):
+ self.set_obj.generate_function_definitions(env, code)
+
+ def generate_execution_code(self, code):
+ code.globalstate.use_utility_code(UtilityCode.load_cached("set_iter", "Optimize.c"))
+ self.set_obj.generate_evaluation_code(code)
+
+ value_ref = self.value_ref
+ value_ref.allocate(code)
+
+ result_temp = code.funcstate.allocate_temp(PyrexTypes.c_int_type, False)
+ code.putln("%s = __Pyx_set_iter_next(%s, %s, &%s, &%s, %s);" % (
+ result_temp,
+ self.set_obj.py_result(),
+ self.expected_size.result(),
+ self.pos_index_var.result(),
+ value_ref.result(),
+ self.is_set_flag.result()
+ ))
+ code.putln("if (unlikely(%s == 0)) break;" % result_temp)
+ code.putln(code.error_goto_if("%s == -1" % result_temp, self.pos))
+ code.funcstate.release_temp(result_temp)
+
+ # evaluate all coercions before the assignments
+ code.put_gotref(value_ref.result())
+ self.coerced_value_var.generate_evaluation_code(code)
+ self.value_target.generate_assignment_code(self.coerced_value_var, code)
+ value_ref.release(code)
+
+
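`__Pyx_set_iter_next` above compares the set's current size against `expected_size` on every step, mirroring CPython's own iterator guard: mutating a set while iterating over it raises `RuntimeError`:

```python
s = {1, 2, 3}
try:
    for item in s:
        s.add(item + 10)   # changes the set size mid-iteration
    outcome = "no error"
except RuntimeError:
    # CPython: "Set changed size during iteration"
    outcome = "runtime_error"
```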
def ForStatNode(pos, **kw):
if 'iterator' in kw:
if kw['iterator'].is_async:
@@ -6302,12 +6752,11 @@
is_async = True
- def __init__(self, pos, iterator, **kw):
+ def __init__(self, pos, **kw):
assert 'item' not in kw
from . import ExprNodes
# AwaitExprNodes must appear before running MarkClosureVisitor
- kw['iterator'] = ExprNodes.AIterAwaitExprNode(iterator.pos, arg=iterator)
- kw['item'] = ExprNodes.AwaitIterNextExprNode(iterator.pos, arg=None)
+ kw['item'] = ExprNodes.AwaitIterNextExprNode(kw['iterator'].pos, arg=None)
_ForInStatNode.__init__(self, pos, **kw)
def _create_item_node(self):
@@ -6364,10 +6813,27 @@
"Consider switching the directions of the relations.", 2)
self.step = self.step.analyse_types(env)
- if self.target.type.is_numeric:
- loop_type = self.target.type
+ self.set_up_loop(env)
+ target_type = self.target.type
+ if not (target_type.is_pyobject or target_type.is_numeric):
+ error(self.target.pos, "for-from loop variable must be c numeric type or Python object")
+
+ self.body = self.body.analyse_expressions(env)
+ if self.else_clause:
+ self.else_clause = self.else_clause.analyse_expressions(env)
+ return self
+
+ def set_up_loop(self, env):
+ from . import ExprNodes
+
+ target_type = self.target.type
+ if target_type.is_numeric:
+ loop_type = target_type
else:
- loop_type = PyrexTypes.c_int_type
+ if target_type.is_enum:
+ warning(self.target.pos,
+ "Integer loops over enum values are fragile. Please cast to a safe integer type instead.")
+ loop_type = PyrexTypes.c_long_type if target_type.is_pyobject else PyrexTypes.c_int_type
if not self.bound1.type.is_pyobject:
loop_type = PyrexTypes.widest_numeric_type(loop_type, self.bound1.type)
if not self.bound2.type.is_pyobject:
@@ -6383,10 +6849,7 @@
if not self.step.is_literal:
self.step = self.step.coerce_to_temp(env)
- target_type = self.target.type
- if not (target_type.is_pyobject or target_type.is_numeric):
- error(self.target.pos, "for-from loop variable must be c numeric type or Python object")
- if target_type.is_numeric:
+ if target_type.is_numeric or target_type.is_enum:
self.is_py_target = False
if isinstance(self.target, ExprNodes.BufferIndexNode):
raise error(self.pos, "Buffer or memoryview slicing/indexing not allowed as for-loop target.")
@@ -6396,12 +6859,7 @@
self.is_py_target = True
c_loopvar_node = ExprNodes.TempNode(self.pos, loop_type, env)
self.loopvar_node = c_loopvar_node
- self.py_loopvar_node = \
- ExprNodes.CloneNode(c_loopvar_node).coerce_to_pyobject(env)
- self.body = self.body.analyse_expressions(env)
- if self.else_clause:
- self.else_clause = self.else_clause.analyse_expressions(env)
- return self
+ self.py_loopvar_node = ExprNodes.CloneNode(c_loopvar_node).coerce_to_pyobject(env)
def generate_execution_code(self, code):
code.mark_pos(self.pos)
@@ -6413,21 +6871,25 @@
if self.step is not None:
self.step.generate_evaluation_code(code)
step = self.step.result()
- incop = "%s=%s" % (incop[0], step)
+ incop = "%s=%s" % (incop[0], step) # e.g. '++' => '+= STEP'
+ else:
+ step = '1'
+
from . import ExprNodes
if isinstance(self.loopvar_node, ExprNodes.TempNode):
self.loopvar_node.allocate(code)
if isinstance(self.py_loopvar_node, ExprNodes.TempNode):
self.py_loopvar_node.allocate(code)
- if from_range:
- loopvar_name = code.funcstate.allocate_temp(self.target.type, False)
+
+ loopvar_type = PyrexTypes.c_long_type if self.target.type.is_enum else self.target.type
+
+ if from_range and not self.is_py_target:
+ loopvar_name = code.funcstate.allocate_temp(loopvar_type, False)
else:
loopvar_name = self.loopvar_node.result()
- if self.target.type.is_int and not self.target.type.signed and self.relation2[0] == '>':
+ if loopvar_type.is_int and not loopvar_type.signed and self.relation2[0] == '>':
# Handle the case where the endpoint of an unsigned int iteration
# is within step of 0.
- if not self.step:
- step = 1
code.putln("for (%s = %s%s + %s; %s %s %s + %s; ) { %s%s;" % (
loopvar_name,
self.bound1.result(), offset, step,
@@ -6439,15 +6901,18 @@
self.bound1.result(), offset,
loopvar_name, self.relation2, self.bound2.result(),
loopvar_name, incop))
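The unsigned-endpoint branch above offsets the counter by the step before comparing, because a plain `i >= 0` test can never fail for an unsigned C integer. A pure-Python model of 32-bit unsigned arithmetic (illustrative only, not the compiler's actual output) shows both the hazard and the rewritten loop shape:

```python
# Model 32-bit unsigned C arithmetic with a mask.
MASK = 2**32 - 1

def count_down_unsigned(bound1, bound2, step=1):
    # Rewritten form: i = bound1 + step; while i >= bound2 + step: i -= step; use i
    visited = []
    i = (bound1 + step) & MASK
    while i >= (bound2 + step) & MASK:
        i = (i - step) & MASK
        visited.append(i)
    return visited

assert count_down_unsigned(3, 0) == [3, 2, 1, 0]

# The naive form never terminates: decrementing an unsigned 0 wraps to 2**32 - 1,
# so a "while i >= 0" condition stays true forever.
assert (0 - 1) & MASK == MASK
```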
- if self.py_loopvar_node:
- self.py_loopvar_node.generate_evaluation_code(code)
- self.target.generate_assignment_code(self.py_loopvar_node, code)
- elif from_range:
- code.putln("%s = %s;" % (
- self.target.result(), loopvar_name))
+
+ coerced_loopvar_node = self.py_loopvar_node
+ if coerced_loopvar_node is None and from_range:
+ coerced_loopvar_node = ExprNodes.RawCNameExprNode(self.target.pos, loopvar_type, loopvar_name)
+ if coerced_loopvar_node is not None:
+ coerced_loopvar_node.generate_evaluation_code(code)
+ self.target.generate_assignment_code(coerced_loopvar_node, code)
+
self.body.generate_execution_code(code)
code.put_label(code.continue_label)
- if self.py_loopvar_node:
+
+ if not from_range and self.py_loopvar_node:
# This mess is to make for..from loops with python targets behave
# exactly like those with C targets with regards to re-assignment
# of the loop variable.
@@ -6459,15 +6924,15 @@
if self.target.entry.scope.is_module_scope:
code.globalstate.use_utility_code(
UtilityCode.load_cached("GetModuleGlobalName", "ObjectHandling.c"))
- lookup_func = '__Pyx_GetModuleGlobalName(%s)'
+ lookup_func = '__Pyx_GetModuleGlobalName(%s, %s); %s'
else:
code.globalstate.use_utility_code(
UtilityCode.load_cached("GetNameInClass", "ObjectHandling.c"))
- lookup_func = '__Pyx_GetNameInClass(%s, %%s)' % (
+ lookup_func = '__Pyx_GetNameInClass(%s, {}, %s); %s'.format(
self.target.entry.scope.namespace_cname)
- code.putln("%s = %s; %s" % (
+ code.putln(lookup_func % (
target_node.result(),
- lookup_func % interned_cname,
+ interned_cname,
code.error_goto_if_null(target_node.result(), self.target.pos)))
code.put_gotref(target_node.result())
else:
@@ -6479,14 +6944,17 @@
if self.target.entry.is_pyglobal:
code.put_decref(target_node.result(), target_node.type)
target_node.release(code)
+
code.putln("}")
- if self.py_loopvar_node:
+
+ if not from_range and self.py_loopvar_node:
# This is potentially wasteful, but we don't want the semantics to
# depend on whether or not the loop is a python type.
self.py_loopvar_node.generate_evaluation_code(code)
self.target.generate_assignment_code(self.py_loopvar_node, code)
- if from_range:
+ if from_range and not self.is_py_target:
code.funcstate.release_temp(loopvar_name)
+
break_label = code.break_label
code.set_loop_labels(old_loop_labels)
if self.else_clause:
@@ -6679,6 +7147,7 @@
# else_clause StatNode or None
child_attrs = ["body", "except_clauses", "else_clause"]
+ in_generator = False
def analyse_declarations(self, env):
self.body.analyse_declarations(env)
@@ -6705,6 +7174,9 @@
gil_message = "Try-except statement"
def generate_execution_code(self, code):
+ code.mark_pos(self.pos) # before changing the error label, in case of tracing errors
+ code.putln("{")
+
old_return_label = code.return_label
old_break_label = code.break_label
old_continue_label = code.continue_label
@@ -6720,8 +7192,6 @@
exc_save_vars = [code.funcstate.allocate_temp(py_object_type, False)
for _ in range(3)]
- code.mark_pos(self.pos)
- code.putln("{")
save_exc = code.insertion_point()
code.putln(
"/*try:*/ {")
@@ -6738,8 +7208,9 @@
if can_raise:
# inject code before the try block to save away the exception state
code.globalstate.use_utility_code(reset_exception_utility_code)
- save_exc.putln("__Pyx_PyThreadState_declare")
- save_exc.putln("__Pyx_PyThreadState_assign")
+ if not self.in_generator:
+ save_exc.putln("__Pyx_PyThreadState_declare")
+ save_exc.putln("__Pyx_PyThreadState_assign")
save_exc.putln("__Pyx_ExceptionSave(%s);" % (
', '.join(['&%s' % var for var in exc_save_vars])))
for var in exc_save_vars:
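The new `in_generator` flag skips re-emitting `__Pyx_PyThreadState_declare`/`assign` inside generator bodies, where the thread state is already held by the generator frame. The user-visible invariant the saved/restored exception state maintains is that an active exception survives a `yield`:

```python
def gen():
    try:
        yield "ready"
    except ValueError:
        # The exception state saved before the try block is still intact
        # even though control left the frame at the yield.
        yield "caught"

g = gen()
assert next(g) == "ready"
assert g.throw(ValueError) == "caught"
```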
@@ -6753,7 +7224,8 @@
else:
# try block cannot raise exceptions, but we had to allocate the temps above,
# so just keep the C compiler from complaining about them being unused
- save_exc.putln("if (%s); else {/*mark used*/}" % '||'.join(exc_save_vars))
+ mark_vars_used = ["(void)%s;" % var for var in exc_save_vars]
+ save_exc.putln("%s /* mark used */" % ' '.join(mark_vars_used))
def restore_saved_exception():
pass
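The replaced line swaps a dummy `if (a || b || c);` statement for the conventional `(void)var;` cast to mark each temp as used. A small sketch of both emitted strings (the `__pyx_t_*` temp names are hypothetical placeholders):

```python
# The old vs. new "mark used" C snippets, as the code generator formats them.
exc_save_vars = ["__pyx_t_1", "__pyx_t_2", "__pyx_t_3"]

old = "if (%s); else {/*mark used*/}" % "||".join(exc_save_vars)
new = "%s /* mark used */" % " ".join("(void)%s;" % var for var in exc_save_vars)

assert old == "if (__pyx_t_1||__pyx_t_2||__pyx_t_3); else {/*mark used*/}"
assert new == "(void)__pyx_t_1; (void)__pyx_t_2; (void)__pyx_t_3; /* mark used */"
```

The `(void)` form is the idiomatic C way to silence unused-variable warnings and, unlike the old form, cannot trigger empty-body or parentheses warnings of its own.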
@@ -6777,11 +7249,16 @@
code.put_xdecref_clear(var, py_object_type)
code.put_goto(try_end_label)
code.put_label(our_error_label)
- code.putln("__Pyx_PyThreadState_assign") # re-assign in case a generator yielded
for temp_name, temp_type in temps_to_clean_up:
code.put_xdecref_clear(temp_name, temp_type)
+
+ outer_except = code.funcstate.current_except
+ # Currently points to self, but the ExceptClauseNode would also be ok. Change if needed.
+ code.funcstate.current_except = self
for except_clause in self.except_clauses:
except_clause.generate_handling_code(code, except_end_label)
+ code.funcstate.current_except = outer_except
+
if not self.has_default_clause:
code.put_goto(except_error_label)
@@ -6796,7 +7273,6 @@
code.put_label(exit_label)
code.mark_pos(self.pos, trace=False)
if can_raise:
- code.putln("__Pyx_PyThreadState_assign") # re-assign in case a generator yielded
restore_saved_exception()
code.put_goto(old_label)
@@ -6805,7 +7281,6 @@
code.put_goto(try_end_label)
code.put_label(except_end_label)
if can_raise:
- code.putln("__Pyx_PyThreadState_assign") # re-assign in case a generator yielded
restore_saved_exception()
if code.label_used(try_end_label):
code.put_label(try_end_label)
@@ -6880,19 +7355,42 @@
def generate_handling_code(self, code, end_label):
code.mark_pos(self.pos)
+
if self.pattern:
- code.globalstate.use_utility_code(UtilityCode.load_cached("PyErrExceptionMatches", "Exceptions.c"))
+ has_non_literals = not all(
+ pattern.is_literal or pattern.is_simple() and not pattern.is_temp
+ for pattern in self.pattern)
+
+ if has_non_literals:
+ # For non-trivial exception check expressions, hide the live exception from C-API calls.
+ exc_vars = [code.funcstate.allocate_temp(py_object_type, manage_ref=True)
+ for _ in range(3)]
+ code.globalstate.use_utility_code(UtilityCode.load_cached("PyErrFetchRestore", "Exceptions.c"))
+ code.putln("__Pyx_ErrFetch(&%s, &%s, &%s);" % tuple(exc_vars))
+ code.globalstate.use_utility_code(UtilityCode.load_cached("FastTypeChecks", "ModuleSetupCode.c"))
+ exc_test_func = "__Pyx_PyErr_GivenExceptionMatches(%s, %%s)" % exc_vars[0]
+ else:
+ exc_vars = ()
+ code.globalstate.use_utility_code(UtilityCode.load_cached("PyErrExceptionMatches", "Exceptions.c"))
+ exc_test_func = "__Pyx_PyErr_ExceptionMatches(%s)"
+
exc_tests = []
for pattern in self.pattern:
pattern.generate_evaluation_code(code)
- exc_tests.append("__Pyx_PyErr_ExceptionMatches(%s)" % pattern.py_result())
+ exc_tests.append(exc_test_func % pattern.py_result())
- match_flag = code.funcstate.allocate_temp(PyrexTypes.c_int_type, False)
- code.putln(
- "%s = %s;" % (match_flag, ' || '.join(exc_tests)))
+ match_flag = code.funcstate.allocate_temp(PyrexTypes.c_int_type, manage_ref=False)
+ code.putln("%s = %s;" % (match_flag, ' || '.join(exc_tests)))
for pattern in self.pattern:
pattern.generate_disposal_code(code)
pattern.free_temps(code)
+
+ if has_non_literals:
+ code.putln("__Pyx_ErrRestore(%s, %s, %s);" % tuple(exc_vars))
+ code.putln(' '.join(["%s = 0;" % var for var in exc_vars]))
+ for temp in exc_vars:
+ code.funcstate.release_temp(temp)
+
code.putln(
"if (%s) {" %
match_flag)
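The `has_non_literals` path above stashes the live exception with `__Pyx_ErrFetch` before evaluating non-trivial `except` patterns, because an attribute lookup or call in the pattern can run arbitrary code that disturbs the error state. The same hazard is visible from pure Python via `sys.exc_info()` (a rough analogy, not the generated code):

```python
import sys

def lookup_pattern():
    # Arbitrary code run while an exception is live; it raises and handles
    # its own exception internally.
    try:
        raise KeyError("internal")
    except KeyError:
        pass
    return ValueError

try:
    raise ValueError("boom")
except BaseException:
    saved = sys.exc_info()      # analogous to __Pyx_ErrFetch
    pattern = lookup_pattern()  # may disturb the C-level error indicator
    # Match against the *saved* exception, as __Pyx_PyErr_GivenExceptionMatches does.
    assert isinstance(saved[1], pattern)
```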
@@ -6911,8 +7409,7 @@
code.putln("}")
return
- exc_vars = [code.funcstate.allocate_temp(py_object_type,
- manage_ref=True)
+ exc_vars = [code.funcstate.allocate_temp(py_object_type, manage_ref=True)
for _ in range(3)]
code.put_add_traceback(self.function_name)
# We always have to fetch the exception value even if
@@ -6922,8 +7419,8 @@
exc_args = "&%s, &%s, &%s" % tuple(exc_vars)
code.putln("if (__Pyx_GetException(%s) < 0) %s" % (
exc_args, code.error_goto(self.pos)))
- for x in exc_vars:
- code.put_gotref(x)
+ for var in exc_vars:
+ code.put_gotref(var)
if self.target:
self.exc_value.set_var(exc_vars[1])
self.exc_value.generate_evaluation_code(code)
@@ -6940,9 +7437,12 @@
code.funcstate.exc_vars = exc_vars
self.body.generate_execution_code(code)
code.funcstate.exc_vars = old_exc_vars
+
if not self.body.is_terminator:
for var in exc_vars:
- code.put_decref_clear(var, py_object_type)
+ # FIXME: XDECREF() is needed to allow re-raising (which clears the exc_vars),
+ # but I don't think it's the right solution.
+ code.put_xdecref_clear(var, py_object_type)
code.put_goto(end_label)
for new_label, old_label in [(code.break_label, old_break_label),
@@ -6981,6 +7481,7 @@
# body StatNode
# finally_clause StatNode
# finally_except_clause deep-copy of finally_clause for exception case
+ # in_generator inside of generator => must store away current exception also in return case
#
# Each of the continue, break, return and error gotos runs
# into its own deep-copy of the finally block code.
@@ -6998,6 +7499,7 @@
finally_except_clause = None
is_try_finally_in_nogil = False
+ in_generator = False
@staticmethod
def create_analysed(pos, env, body, finally_clause):
@@ -7022,7 +7524,9 @@
gil_message = "Try-finally statement"
def generate_execution_code(self, code):
- code.mark_pos(self.pos)
+ code.mark_pos(self.pos) # before changing the error label, in case of tracing errors
+ code.putln("/*try:*/ {")
+
old_error_label = code.error_label
old_labels = code.all_new_labels()
new_labels = code.get_all_labels()
@@ -7031,7 +7535,6 @@
code.error_label = old_error_label
catch_label = code.new_label()
- code.putln("/*try:*/ {")
was_in_try_finally = code.funcstate.in_try_finally
code.funcstate.in_try_finally = 1
@@ -7039,12 +7542,14 @@
code.funcstate.in_try_finally = was_in_try_finally
code.putln("}")
- code.set_all_labels(old_labels)
temps_to_clean_up = code.funcstate.all_free_managed_temps()
code.mark_pos(self.finally_clause.pos)
code.putln("/*finally:*/ {")
+ # Reset labels only after writing out a potential line trace call for correct nogil error handling.
+ code.set_all_labels(old_labels)
+
def fresh_finally_clause(_next=[self.finally_clause]):
# generate the original subtree once and always keep a fresh copy
node = _next[0]
@@ -7066,8 +7571,10 @@
code.putln('}')
if preserve_error:
+ code.put_label(new_error_label)
code.putln('/*exception exit:*/{')
- code.putln("__Pyx_PyThreadState_declare")
+ if not self.in_generator:
+ code.putln("__Pyx_PyThreadState_declare")
if self.is_try_finally_in_nogil:
code.declare_gilstate()
if needs_success_cleanup:
@@ -7082,7 +7589,6 @@
exc_vars = tuple([
code.funcstate.allocate_temp(py_object_type, manage_ref=False)
for _ in range(6)])
- code.put_label(new_error_label)
self.put_error_catcher(
code, temps_to_clean_up, exc_vars, exc_lineno_cnames, exc_filename_cname)
finally_old_labels = code.all_new_labels()
@@ -7116,32 +7622,46 @@
code.set_all_labels(old_labels)
return_label = code.return_label
+ exc_vars = ()
+
for i, (new_label, old_label) in enumerate(zip(new_labels, old_labels)):
if not code.label_used(new_label):
continue
if new_label == new_error_label and preserve_error:
continue # handled above
- code.put('%s: ' % new_label)
- code.putln('{')
+ code.putln('%s: {' % new_label)
ret_temp = None
- if old_label == return_label and not self.finally_clause.is_terminator:
- # store away return value for later reuse
- if (self.func_return_type and
- not self.is_try_finally_in_nogil and
- not isinstance(self.finally_clause, GILExitNode)):
- ret_temp = code.funcstate.allocate_temp(
- self.func_return_type, manage_ref=False)
- code.putln("%s = %s;" % (ret_temp, Naming.retval_cname))
- if self.func_return_type.is_pyobject:
- code.putln("%s = 0;" % Naming.retval_cname)
+ if old_label == return_label:
+ # return actually raises an (uncatchable) exception in generators that we must preserve
+ if self.in_generator:
+ exc_vars = tuple([
+ code.funcstate.allocate_temp(py_object_type, manage_ref=False)
+ for _ in range(6)])
+ self.put_error_catcher(code, [], exc_vars)
+ if not self.finally_clause.is_terminator:
+ # store away return value for later reuse
+ if (self.func_return_type and
+ not self.is_try_finally_in_nogil and
+ not isinstance(self.finally_clause, GILExitNode)):
+ ret_temp = code.funcstate.allocate_temp(
+ self.func_return_type, manage_ref=False)
+ code.putln("%s = %s;" % (ret_temp, Naming.retval_cname))
+ if self.func_return_type.is_pyobject:
+ code.putln("%s = 0;" % Naming.retval_cname)
+
fresh_finally_clause().generate_execution_code(code)
- if ret_temp:
- code.putln("%s = %s;" % (Naming.retval_cname, ret_temp))
- if self.func_return_type.is_pyobject:
- code.putln("%s = 0;" % ret_temp)
- code.funcstate.release_temp(ret_temp)
- ret_temp = None
+
+ if old_label == return_label:
+ if ret_temp:
+ code.putln("%s = %s;" % (Naming.retval_cname, ret_temp))
+ if self.func_return_type.is_pyobject:
+ code.putln("%s = 0;" % ret_temp)
+ code.funcstate.release_temp(ret_temp)
+ ret_temp = None
+ if self.in_generator:
+ self.put_error_uncatcher(code, exc_vars)
+
if not self.finally_clause.is_terminator:
code.put_goto(old_label)
code.putln('}')
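The new `in_generator` branch above wraps the return case in `put_error_catcher`/`put_error_uncatcher` because, in a generator, `return value` is delivered as a normally uncatchable `StopIteration(value)` that the `finally` body must not clobber:

```python
def gen():
    try:
        yield 1
        return 42          # surfaces as StopIteration(42)
    finally:
        cleanup_ran = True  # must leave the pending return value intact

g = gen()
assert next(g) == 1
try:
    next(g)
except StopIteration as exc:
    assert exc.value == 42  # the return value survived the finally block
else:
    raise AssertionError("generator should have finished")
```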
@@ -7156,16 +7676,16 @@
self.finally_clause.generate_function_definitions(env, code)
def put_error_catcher(self, code, temps_to_clean_up, exc_vars,
- exc_lineno_cnames, exc_filename_cname):
+ exc_lineno_cnames=None, exc_filename_cname=None):
code.globalstate.use_utility_code(restore_exception_utility_code)
code.globalstate.use_utility_code(get_exception_utility_code)
code.globalstate.use_utility_code(swap_exception_utility_code)
- code.putln(' '.join(["%s = 0;"]*len(exc_vars)) % exc_vars)
if self.is_try_finally_in_nogil:
code.put_ensure_gil(declare_gilstate=False)
code.putln("__Pyx_PyThreadState_assign")
+ code.putln(' '.join(["%s = 0;" % var for var in exc_vars]))
for temp_name, type in temps_to_clean_up:
code.put_xdecref_clear(temp_name, type)
@@ -7189,13 +7709,12 @@
if self.is_try_finally_in_nogil:
code.put_release_ensured_gil()
- def put_error_uncatcher(self, code, exc_vars, exc_lineno_cnames, exc_filename_cname):
+ def put_error_uncatcher(self, code, exc_vars, exc_lineno_cnames=None, exc_filename_cname=None):
code.globalstate.use_utility_code(restore_exception_utility_code)
code.globalstate.use_utility_code(reset_exception_utility_code)
if self.is_try_finally_in_nogil:
code.put_ensure_gil(declare_gilstate=False)
- code.putln("__Pyx_PyThreadState_assign") # re-assign in case a generator yielded
# not using preprocessor here to avoid warnings about
# unused utility functions and/or temps
@@ -7211,7 +7730,7 @@
if self.is_try_finally_in_nogil:
code.put_release_ensured_gil()
- code.putln(' '.join(["%s = 0;"]*len(exc_vars)) % exc_vars)
+ code.putln(' '.join(["%s = 0;" % var for var in exc_vars]))
if exc_lineno_cnames:
code.putln("%s = %s; %s = %s; %s = %s;" % (
Naming.lineno_cname, exc_lineno_cnames[0],
@@ -7222,7 +7741,6 @@
code.globalstate.use_utility_code(reset_exception_utility_code)
if self.is_try_finally_in_nogil:
code.put_ensure_gil(declare_gilstate=False)
- code.putln("__Pyx_PyThreadState_assign") # re-assign in case a generator yielded
# not using preprocessor here to avoid warnings about
# unused utility functions and/or temps
@@ -7271,7 +7789,7 @@
from .ParseTreeTransforms import YieldNodeCollector
collector = YieldNodeCollector()
collector.visitchildren(body)
- if not collector.yields and not collector.awaits:
+ if not collector.yields:
return
if state == 'gil':
@@ -7688,11 +8206,17 @@
if self.kwargs:
# Try to find num_threads and chunksize keyword arguments
pairs = []
+ seen = set()
for dictitem in self.kwargs.key_value_pairs:
+ if dictitem.key.value in seen:
+ error(self.pos, "Duplicate keyword argument found: %s" % dictitem.key.value)
+ seen.add(dictitem.key.value)
if dictitem.key.value == 'num_threads':
- self.num_threads = dictitem.value
+ if not dictitem.value.is_none:
+ self.num_threads = dictitem.value
elif self.is_prange and dictitem.key.value == 'chunksize':
- self.chunksize = dictitem.value
+ if not dictitem.value.is_none:
+ self.chunksize = dictitem.value
else:
pairs.append(dictitem)
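The kwarg scan above gains two checks: duplicate keywords are now an error, and an explicit `None` for `num_threads`/`chunksize` is treated as "not given". A standalone sketch with a hypothetical helper name mirroring that logic:

```python
def split_parallel_kwargs(pairs, is_prange=True):
    """Mirror of the scan above (hypothetical helper, not Cython API):
    reject duplicate keywords; skip num_threads/chunksize when None."""
    seen = set()
    special, rest = {}, []
    for key, value in pairs:
        if key in seen:
            raise ValueError("Duplicate keyword argument found: %s" % key)
        seen.add(key)
        if key == 'num_threads' or (is_prange and key == 'chunksize'):
            if value is not None:
                special[key] = value
        else:
            rest.append((key, value))
    return special, rest

special, rest = split_parallel_kwargs([('num_threads', None), ('schedule', 'static')])
assert special == {} and rest == [('schedule', 'static')]
```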
@@ -7732,7 +8256,7 @@
self.num_threads.compile_time_value(env) <= 0):
error(self.pos, "argument to num_threads must be greater than 0")
- if not self.num_threads.is_simple():
+ if not self.num_threads.is_simple() or self.num_threads.type.is_pyobject:
self.num_threads = self.num_threads.coerce_to(
PyrexTypes.c_int_type, env).coerce_to_temp(env)
return self
@@ -7920,9 +8444,10 @@
Make any used temporaries private. Before the relevant code block
code.start_collecting_temps() should have been called.
"""
- if self.is_parallel:
- c = self.privatization_insertion_point
+ c = self.privatization_insertion_point
+ self.privatization_insertion_point = None
+ if self.is_parallel:
self.temps = temps = code.funcstate.stop_collecting_temps()
privates, firstprivates = [], []
for temp, type in sorted(temps):
@@ -8013,8 +8538,10 @@
If compiled without OpenMP support (at the C level), then we still have
to acquire the GIL to decref any object temporaries.
"""
+ begin_code = self.begin_of_parallel_block
+ self.begin_of_parallel_block = None
+
if self.error_label_used:
- begin_code = self.begin_of_parallel_block
end_code = code
begin_code.putln("#ifdef _OPENMP")
@@ -8227,6 +8754,8 @@
the for loop.
"""
c = self.begin_of_parallel_control_block_point
+ self.begin_of_parallel_control_block_point = None
+ self.begin_of_parallel_control_block_point_after_decls = None
# Firstly, always prefer errors over returning, continue or break
if self.error_label_used:
@@ -8577,8 +9106,6 @@
self.setup_parallel_control_flow_block(code) # parallel control flow block
- self.control_flow_var_code_point = code.insertion_point()
-
# Note: nsteps is private in an outer scope if present
code.putln("%(nsteps)s = (%(stop)s - %(start)s + %(step)s - %(step)s/abs(%(step)s)) / %(step)s;" % fmt_dict)
diff -Nru cython-0.26.1/Cython/Compiler/Optimize.py cython-0.29.14/Cython/Compiler/Optimize.py
--- cython-0.26.1/Cython/Compiler/Optimize.py 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Optimize.py 2019-05-27 19:37:21.000000000 +0000
@@ -1,5 +1,6 @@
from __future__ import absolute_import
+import re
import sys
import copy
import codecs
@@ -8,14 +9,16 @@
from . import TypeSlots
from .ExprNodes import not_a_constant
import cython
-cython.declare(UtilityCode=object, EncodedString=object, bytes_literal=object,
+cython.declare(UtilityCode=object, EncodedString=object, bytes_literal=object, encoded_string=object,
Nodes=object, ExprNodes=object, PyrexTypes=object, Builtin=object,
UtilNodes=object, _py_int_types=object)
if sys.version_info[0] >= 3:
_py_int_types = int
+ _py_string_types = (bytes, str)
else:
_py_int_types = (int, long)
+ _py_string_types = (bytes, unicode)
from . import Nodes
from . import ExprNodes
@@ -26,8 +29,8 @@
from . import Options
from .Code import UtilityCode, TempitaUtilityCode
-from .StringEncoding import EncodedString, bytes_literal
-from .Errors import error
+from .StringEncoding import EncodedString, bytes_literal, encoded_string
+from .Errors import error, warning
from .ParseTreeTransforms import SkipDeclarations
try:
@@ -186,38 +189,61 @@
self.visitchildren(node)
return self._optimise_for_loop(node, node.iterator.sequence)
- def _optimise_for_loop(self, node, iterator, reversed=False):
- if iterator.type is Builtin.dict_type:
+ def _optimise_for_loop(self, node, iterable, reversed=False):
+ annotation_type = None
+ if (iterable.is_name or iterable.is_attribute) and iterable.entry and iterable.entry.annotation:
+ annotation = iterable.entry.annotation
+ if annotation.is_subscript:
+ annotation = annotation.base # container base type
+ # FIXME: generalise annotation evaluation => maybe provide a "qualified name" also for imported names?
+ if annotation.is_name:
+ if annotation.entry and annotation.entry.qualified_name == 'typing.Dict':
+ annotation_type = Builtin.dict_type
+ elif annotation.name == 'Dict':
+ annotation_type = Builtin.dict_type
+ if annotation.entry and annotation.entry.qualified_name in ('typing.Set', 'typing.FrozenSet'):
+ annotation_type = Builtin.set_type
+ elif annotation.name in ('Set', 'FrozenSet'):
+ annotation_type = Builtin.set_type
+
+ if Builtin.dict_type in (iterable.type, annotation_type):
# like iterating over dict.keys()
if reversed:
# CPython raises an error here: not a sequence
return node
return self._transform_dict_iteration(
- node, dict_obj=iterator, method=None, keys=True, values=False)
+ node, dict_obj=iterable, method=None, keys=True, values=False)
+
+ if (Builtin.set_type in (iterable.type, annotation_type) or
+ Builtin.frozenset_type in (iterable.type, annotation_type)):
+ if reversed:
+ # CPython raises an error here: not a sequence
+ return node
+ return self._transform_set_iteration(node, iterable)
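The loop optimiser now also consults PEP 484 annotations (`typing.Dict`, `typing.Set`, `typing.FrozenSet`) to pick the container fast path. In plain CPython the annotations change nothing; when compiled, they are what unlocks the specialised iteration (a sketch of the annotated shapes, not compiler internals):

```python
from typing import Dict, Set

def joined_keys(d: Dict[str, int]) -> str:
    # With the Dict annotation, compiled code can iterate via PyDict_Next
    # instead of creating a generic iterator.
    out = []
    for key in d:
        out.append(key)
    return "".join(sorted(out))

def count_items(s: Set[int]) -> int:
    # Same idea for set/frozenset, via _transform_set_iteration.
    n = 0
    for _ in s:
        n += 1
    return n

assert joined_keys({"b": 2, "a": 1}) == "ab"
assert count_items({1, 2, 3}) == 3
```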
# C array (slice) iteration?
- if iterator.type.is_ptr or iterator.type.is_array:
- return self._transform_carray_iteration(node, iterator, reversed=reversed)
- if iterator.type is Builtin.bytes_type:
- return self._transform_bytes_iteration(node, iterator, reversed=reversed)
- if iterator.type is Builtin.unicode_type:
- return self._transform_unicode_iteration(node, iterator, reversed=reversed)
+ if iterable.type.is_ptr or iterable.type.is_array:
+ return self._transform_carray_iteration(node, iterable, reversed=reversed)
+ if iterable.type is Builtin.bytes_type:
+ return self._transform_bytes_iteration(node, iterable, reversed=reversed)
+ if iterable.type is Builtin.unicode_type:
+ return self._transform_unicode_iteration(node, iterable, reversed=reversed)
# the rest is based on function calls
- if not isinstance(iterator, ExprNodes.SimpleCallNode):
+ if not isinstance(iterable, ExprNodes.SimpleCallNode):
return node
- if iterator.args is None:
- arg_count = iterator.arg_tuple and len(iterator.arg_tuple.args) or 0
+ if iterable.args is None:
+ arg_count = iterable.arg_tuple and len(iterable.arg_tuple.args) or 0
else:
- arg_count = len(iterator.args)
- if arg_count and iterator.self is not None:
+ arg_count = len(iterable.args)
+ if arg_count and iterable.self is not None:
arg_count -= 1
- function = iterator.function
+ function = iterable.function
# dict iteration?
if function.is_attribute and not reversed and not arg_count:
- base_obj = iterator.self or function.obj
+ base_obj = iterable.self or function.obj
method = function.attribute
# in Py3, items() is equivalent to Py2's iteritems()
is_safe_iter = self.global_scope().context.language_level >= 3
@@ -245,25 +271,35 @@
node, base_obj, method, keys, values)
# enumerate/reversed ?
- if iterator.self is None and function.is_name and \
+ if iterable.self is None and function.is_name and \
function.entry and function.entry.is_builtin:
if function.name == 'enumerate':
if reversed:
# CPython raises an error here: not a sequence
return node
- return self._transform_enumerate_iteration(node, iterator)
+ return self._transform_enumerate_iteration(node, iterable)
elif function.name == 'reversed':
if reversed:
# CPython raises an error here: not a sequence
return node
- return self._transform_reversed_iteration(node, iterator)
+ return self._transform_reversed_iteration(node, iterable)
# range() iteration?
- if Options.convert_range and node.target.type.is_int:
- if iterator.self is None and function.is_name and \
- function.entry and function.entry.is_builtin and \
- function.name in ('range', 'xrange'):
- return self._transform_range_iteration(node, iterator, reversed=reversed)
+ if Options.convert_range and arg_count >= 1 and (
+ iterable.self is None and
+ function.is_name and function.name in ('range', 'xrange') and
+ function.entry and function.entry.is_builtin):
+ if node.target.type.is_int or node.target.type.is_enum:
+ return self._transform_range_iteration(node, iterable, reversed=reversed)
+ if node.target.type.is_pyobject:
+ # Assume that small integer ranges (C long >= 32bit) are best handled in C as well.
+ for arg in (iterable.arg_tuple.args if iterable.args is None else iterable.args):
+ if isinstance(arg, ExprNodes.IntNode):
+ if arg.has_constant_result() and -2**30 <= arg.constant_result < 2**30:
+ continue
+ break
+ else:
+ return self._transform_range_iteration(node, iterable, reversed=reversed)
return node
@@ -768,6 +804,7 @@
step=step, body=node.body,
else_clause=node.else_clause,
from_range=True)
+ for_node.set_up_loop(self.current_env())
if bound2_is_temp:
for_node = UtilNodes.LetNode(bound2, for_node)
@@ -892,7 +929,7 @@
method_node = ExprNodes.StringNode(
dict_obj.pos, is_identifier=True, value=method)
dict_obj = dict_obj.as_none_safe_node(
- "'NoneType' object has no attribute '%s'",
+ "'NoneType' object has no attribute '%{0}s'".format('.30' if len(method) <= 30 else ''),
error = "PyExc_AttributeError",
format_args = [method])
else:
@@ -946,6 +983,85 @@
PyrexTypes.CFuncTypeArg("p_is_dict", PyrexTypes.c_int_ptr_type, None),
])
+ PySet_Iterator_func_type = PyrexTypes.CFuncType(
+ PyrexTypes.py_object_type, [
+ PyrexTypes.CFuncTypeArg("set", PyrexTypes.py_object_type, None),
+ PyrexTypes.CFuncTypeArg("is_set", PyrexTypes.c_int_type, None),
+ PyrexTypes.CFuncTypeArg("p_orig_length", PyrexTypes.c_py_ssize_t_ptr_type, None),
+ PyrexTypes.CFuncTypeArg("p_is_set", PyrexTypes.c_int_ptr_type, None),
+ ])
+
+ def _transform_set_iteration(self, node, set_obj):
+ temps = []
+ temp = UtilNodes.TempHandle(PyrexTypes.py_object_type)
+ temps.append(temp)
+ set_temp = temp.ref(set_obj.pos)
+ temp = UtilNodes.TempHandle(PyrexTypes.c_py_ssize_t_type)
+ temps.append(temp)
+ pos_temp = temp.ref(node.pos)
+
+ if isinstance(node.body, Nodes.StatListNode):
+ body = node.body
+ else:
+ body = Nodes.StatListNode(pos = node.body.pos,
+ stats = [node.body])
+
+ # keep original length to guard against set modification
+ set_len_temp = UtilNodes.TempHandle(PyrexTypes.c_py_ssize_t_type)
+ temps.append(set_len_temp)
+ set_len_temp_addr = ExprNodes.AmpersandNode(
+ node.pos, operand=set_len_temp.ref(set_obj.pos),
+ type=PyrexTypes.c_ptr_type(set_len_temp.type))
+ temp = UtilNodes.TempHandle(PyrexTypes.c_int_type)
+ temps.append(temp)
+ is_set_temp = temp.ref(node.pos)
+ is_set_temp_addr = ExprNodes.AmpersandNode(
+ node.pos, operand=is_set_temp,
+ type=PyrexTypes.c_ptr_type(temp.type))
+
+ value_target = node.target
+ iter_next_node = Nodes.SetIterationNextNode(
+ set_temp, set_len_temp.ref(set_obj.pos), pos_temp, value_target, is_set_temp)
+ iter_next_node = iter_next_node.analyse_expressions(self.current_env())
+ body.stats[0:0] = [iter_next_node]
+
+ def flag_node(value):
+ value = value and 1 or 0
+ return ExprNodes.IntNode(node.pos, value=str(value), constant_result=value)
+
+ result_code = [
+ Nodes.SingleAssignmentNode(
+ node.pos,
+ lhs=pos_temp,
+ rhs=ExprNodes.IntNode(node.pos, value='0', constant_result=0)),
+ Nodes.SingleAssignmentNode(
+ set_obj.pos,
+ lhs=set_temp,
+ rhs=ExprNodes.PythonCapiCallNode(
+ set_obj.pos,
+ "__Pyx_set_iterator",
+ self.PySet_Iterator_func_type,
+ utility_code=UtilityCode.load_cached("set_iter", "Optimize.c"),
+ args=[set_obj, flag_node(set_obj.type is Builtin.set_type),
+ set_len_temp_addr, is_set_temp_addr,
+ ],
+ is_temp=True,
+ )),
+ Nodes.WhileStatNode(
+ node.pos,
+ condition=None,
+ body=body,
+ else_clause=node.else_clause,
+ )
+ ]
+
+ return UtilNodes.TempsBlockNode(
+ node.pos, temps=temps,
+ body=Nodes.StatListNode(
+ node.pos,
+ stats = result_code
+ ))
+
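The generated set-iteration loop records the original length up front and aborts if the set is resized mid-iteration, matching CPython's own guard. A pure-Python model of that contract (hypothetical helper name, not the emitted code):

```python
def iter_set_guarded(s):
    """Model of the generated loop: remember len(s) once and fail fast
    if the set changes size during iteration."""
    orig_len = len(s)
    it = iter(s)
    while True:
        try:
            item = next(it)
        except StopIteration:
            return
        if len(s) != orig_len:
            raise RuntimeError("set changed size during iteration")
        yield item

assert sorted(iter_set_guarded({1, 2, 3})) == [1, 2, 3]
```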
class SwitchTransform(Visitor.EnvTransform):
"""
@@ -1076,9 +1192,9 @@
if common_var is None:
self.visitchildren(node)
return node
- cases.append(Nodes.SwitchCaseNode(pos = if_clause.pos,
- conditions = conditions,
- body = if_clause.body))
+ cases.append(Nodes.SwitchCaseNode(pos=if_clause.pos,
+ conditions=conditions,
+ body=if_clause.body))
condition_values = [
cond for case in cases for cond in case.conditions]
@@ -1089,11 +1205,16 @@
self.visitchildren(node)
return node
+ # Recurse into body subtrees that we left untouched so far.
+ self.visitchildren(node, 'else_clause')
+ for case in cases:
+ self.visitchildren(case, 'body')
+
common_var = unwrap_node(common_var)
- switch_node = Nodes.SwitchStatNode(pos = node.pos,
- test = common_var,
- cases = cases,
- else_clause = node.else_clause)
+ switch_node = Nodes.SwitchStatNode(pos=node.pos,
+ test=common_var,
+ cases=cases,
+ else_clause=node.else_clause)
return switch_node
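SwitchTransform maps an if/elif chain over one common variable with constant conditions onto a C `switch`. The Python-level semantics it must preserve, and the reason `has_duplicate_values()` bails out (duplicate `case` labels are a C compile error), can be sketched as:

```python
# The chain below is the shape SwitchTransform recognises:
# one common variable, constant (hashable, distinct) condition values.
def classify(x):
    if x == 1 or x == 2:
        return "small"
    elif x == 3:
        return "three"
    else:
        return "other"

# Condition values collected across all cases must be pairwise distinct,
# since each becomes a C "case" label.
condition_values = [1, 2, 3]
assert len(condition_values) == len(set(condition_values))
assert [classify(v) for v in (1, 3, 9)] == ["small", "three", "other"]
```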
def visit_CondExprNode(self, node):
@@ -1104,10 +1225,11 @@
not_in, common_var, conditions = self.extract_common_conditions(
None, node.test, True)
if common_var is None \
- or len(conditions) < 2 \
- or self.has_duplicate_values(conditions):
+ or len(conditions) < 2 \
+ or self.has_duplicate_values(conditions):
self.visitchildren(node)
return node
+
return self.build_simple_switch_statement(
node, common_var, conditions, not_in,
node.true_val, node.false_val)
@@ -1120,8 +1242,8 @@
not_in, common_var, conditions = self.extract_common_conditions(
None, node, True)
if common_var is None \
- or len(conditions) < 2 \
- or self.has_duplicate_values(conditions):
+ or len(conditions) < 2 \
+ or self.has_duplicate_values(conditions):
self.visitchildren(node)
node.wrap_operands(self.current_env()) # in case we changed the operands
return node
@@ -1139,8 +1261,8 @@
not_in, common_var, conditions = self.extract_common_conditions(
None, node, True)
if common_var is None \
- or len(conditions) < 2 \
- or self.has_duplicate_values(conditions):
+ or len(conditions) < 2 \
+ or self.has_duplicate_values(conditions):
self.visitchildren(node)
return node
@@ -1909,16 +2031,11 @@
"""
### cleanup to avoid redundant coercions to/from Python types
- def _visit_PyTypeTestNode(self, node):
- # disabled - appears to break assignments in some cases, and
- # also drops a None check, which might still be required
+ def visit_PyTypeTestNode(self, node):
"""Flatten redundant type checks after tree changes.
"""
- old_arg = node.arg
self.visitchildren(node)
- if old_arg is node.arg or node.arg.type != node.type:
- return node
- return node.arg
+ return node.reanalyse()
def _visit_TypecastNode(self, node):
# disabled - the user may have had a reason to put a type
@@ -1933,11 +2050,18 @@
def visit_ExprStatNode(self, node):
"""
- Drop useless coercions.
+ Drop dead code and useless coercions.
"""
self.visitchildren(node)
if isinstance(node.expr, ExprNodes.CoerceToPyTypeNode):
node.expr = node.expr.arg
+ expr = node.expr
+ if expr is None or expr.is_none or expr.is_literal:
+ # Expression was removed or is dead code => remove ExprStatNode as well.
+ return None
+ if expr.is_name and expr.entry and (expr.entry.is_local or expr.entry.is_arg):
+ # Ignore dead references to local variables etc.
+ return None
return node
def visit_CoerceToBooleanNode(self, node):
@@ -2155,7 +2279,8 @@
attribute=attr_name,
is_called=True).analyse_as_type_attribute(self.current_env())
if method is None:
- return node
+ return self._optimise_generic_builtin_method_call(
+ node, attr_name, function, arg_list, is_unbound_method)
args = node.args
if args is None and node.arg_tuple:
args = node.arg_tuple.args
@@ -2171,6 +2296,62 @@
### builtin types
+ def _optimise_generic_builtin_method_call(self, node, attr_name, function, arg_list, is_unbound_method):
+ """
+ Try to inject an unbound method call for a call to a method of a known builtin type.
+ This enables caching the underlying C function of the method at runtime.
+ """
+ arg_count = len(arg_list)
+ if is_unbound_method or arg_count >= 3 or not (function.is_attribute and function.is_py_attr):
+ return node
+ if not function.obj.type.is_builtin_type:
+ return node
+ if function.obj.type.name in ('basestring', 'type'):
+ # these allow different actual types => unsafe
+ return node
+ return ExprNodes.CachedBuiltinMethodCallNode(
+ node, function.obj, attr_name, arg_list)
+
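The `CachedBuiltinMethodCallNode` introduced above amounts to resolving the unbound method of a known builtin type once and reusing it, instead of looking the method up on the instance at every call. The idea in plain Python:

```python
# Sketch of the cached-unbound-method idea: list.append is looked up once,
# then called with the instance as the explicit first argument.
cached_append = list.append   # resolved once, cached

data = []
for i in range(3):
    cached_append(data, i)    # no per-call attribute lookup on `data`
assert data == [0, 1, 2]
```

This is only safe when the receiver's type is known exactly, which is why the code above excludes `basestring` and `type`: instances of those can be of differing concrete types at runtime.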
+ PyObject_Unicode_func_type = PyrexTypes.CFuncType(
+ Builtin.unicode_type, [
+ PyrexTypes.CFuncTypeArg("obj", PyrexTypes.py_object_type, None)
+ ])
+
+ def _handle_simple_function_unicode(self, node, function, pos_args):
+ """Optimise single argument calls to unicode().
+ """
+ if len(pos_args) != 1:
+ if len(pos_args) == 0:
+ return ExprNodes.UnicodeNode(node.pos, value=EncodedString(), constant_result=u'')
+ return node
+ arg = pos_args[0]
+ if arg.type is Builtin.unicode_type:
+ if not arg.may_be_none():
+ return arg
+ cname = "__Pyx_PyUnicode_Unicode"
+ utility_code = UtilityCode.load_cached('PyUnicode_Unicode', 'StringTools.c')
+ else:
+ cname = "__Pyx_PyObject_Unicode"
+ utility_code = UtilityCode.load_cached('PyObject_Unicode', 'StringTools.c')
+ return ExprNodes.PythonCapiCallNode(
+ node.pos, cname, self.PyObject_Unicode_func_type,
+ args=pos_args,
+ is_temp=node.is_temp,
+ utility_code=utility_code,
+ py_name="unicode")
+
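At the Python level, the three branches of `_handle_simple_function_unicode` correspond to these runtime behaviours. The sketch below models the Py3 case where `str` is the unicode type; the helper name and `_MISSING` sentinel are hypothetical:

```python
_MISSING = object()  # stands in for "no argument was passed"

def unicode_call_sketch(obj=_MISSING):
    """Plain-Python model of the optimised unicode() call paths."""
    if obj is _MISSING:
        return u''        # unicode() with no args -> constant empty string
    if type(obj) is str:
        return obj        # exact unicode, known non-None: returned as-is
    return str(obj)       # generic fallback (the __Pyx_PyObject_Unicode path)
```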
+ def visit_FormattedValueNode(self, node):
+ """Simplify or avoid plain string formatting of a unicode value.
+ This seems misplaced here, but plain unicode formatting is essentially
+ a call to the unicode() builtin, which is optimised right above.
+ """
+ self.visitchildren(node)
+ if node.value.type is Builtin.unicode_type and not node.c_format_spec and not node.format_spec:
+ if not node.conversion_char or node.conversion_char == 's':
+                # value is definitely a unicode string and no special formatting is applied
+ return self._handle_simple_function_unicode(node, None, [node.value])
+ return node
+
PyDict_Copy_func_type = PyrexTypes.CFuncType(
Builtin.dict_type, [
PyrexTypes.CFuncTypeArg("dict", Builtin.dict_type, None)
@@ -2210,14 +2391,10 @@
PyrexTypes.CFuncTypeArg("list", Builtin.list_type, None)
])
- PySequence_Tuple_func_type = PyrexTypes.CFuncType(
- Builtin.tuple_type,
- [PyrexTypes.CFuncTypeArg("it", PyrexTypes.py_object_type, None)])
-
def _handle_simple_function_tuple(self, node, function, pos_args):
"""Replace tuple([...]) by PyList_AsTuple or PySequence_Tuple.
"""
- if len(pos_args) != 1:
+ if len(pos_args) != 1 or not node.is_temp:
return node
arg = pos_args[0]
if arg.type is Builtin.tuple_type and not arg.may_be_none():
@@ -2230,9 +2407,7 @@
node.pos, "PyList_AsTuple", self.PyList_AsTuple_func_type,
args=pos_args, is_temp=node.is_temp)
else:
- return ExprNodes.PythonCapiCallNode(
- node.pos, "PySequence_Tuple", self.PySequence_Tuple_func_type,
- args=pos_args, is_temp=node.is_temp)
+ return ExprNodes.AsTupleNode(node.pos, arg=arg, type=Builtin.tuple_type)
PySet_New_func_type = PyrexTypes.CFuncType(
Builtin.set_type, [
@@ -2398,6 +2573,7 @@
_map_to_capi_len_function = {
Builtin.unicode_type: "__Pyx_PyUnicode_GET_LENGTH",
Builtin.bytes_type: "PyBytes_GET_SIZE",
+ Builtin.bytearray_type: 'PyByteArray_GET_SIZE',
Builtin.list_type: "PyList_GET_SIZE",
Builtin.tuple_type: "PyTuple_GET_SIZE",
Builtin.set_type: "PySet_GET_SIZE",
@@ -2429,6 +2605,14 @@
node.pos, "__Pyx_Py_UNICODE_strlen", self.Pyx_Py_UNICODE_strlen_func_type,
args = [arg],
is_temp = node.is_temp)
+ elif arg.type.is_memoryviewslice:
+ func_type = PyrexTypes.CFuncType(
+ PyrexTypes.c_size_t_type, [
+ PyrexTypes.CFuncTypeArg("memoryviewslice", arg.type, None)
+ ], nogil=True)
+ new_node = ExprNodes.PythonCapiCallNode(
+ node.pos, "__Pyx_MemoryView_Len", func_type,
+ args=[arg], is_temp=node.is_temp)
elif arg.type.is_pyobject:
cfunc_name = self._map_to_capi_len_function(arg.type)
if cfunc_name is None:
@@ -2442,8 +2626,7 @@
"object of type 'NoneType' has no len()")
new_node = ExprNodes.PythonCapiCallNode(
node.pos, cfunc_name, self.PyObject_Size_func_type,
- args = [arg],
- is_temp = node.is_temp)
+ args=[arg], is_temp=node.is_temp)
elif arg.type.is_unicode_char:
return ExprNodes.IntNode(node.pos, value='1', constant_result=1,
type=node.type)
@@ -2624,7 +2807,7 @@
PyTypeObjectPtr = PyrexTypes.CPtrType(
cython_scope.lookup('PyTypeObject').type)
pyx_tp_new_kwargs_func_type = PyrexTypes.CFuncType(
- PyrexTypes.py_object_type, [
+ ext_type, [
PyrexTypes.CFuncTypeArg("type", PyTypeObjectPtr, None),
PyrexTypes.CFuncTypeArg("args", PyrexTypes.py_object_type, None),
PyrexTypes.CFuncTypeArg("kwargs", PyrexTypes.py_object_type, None),
@@ -2637,6 +2820,7 @@
node.pos, slot_func_cname,
pyx_tp_new_kwargs_func_type,
args=[type_arg, args_tuple, kwargs],
+ may_return_none=False,
is_temp=True)
else:
# arbitrary variable, needs a None check for safety
@@ -2684,6 +2868,69 @@
utility_code=load_c_utility('append')
)
+ def _handle_simple_method_list_extend(self, node, function, args, is_unbound_method):
+        """Replace list.extend([...]) for short sequence literals by sequential appends
+ to avoid creating an intermediate sequence argument.
+ """
+ if len(args) != 2:
+ return node
+ obj, value = args
+ if not value.is_sequence_constructor:
+ return node
+ items = list(value.args)
+ if value.mult_factor is not None or len(items) > 8:
+ # Appending wins for short sequences but slows down when multiple resize operations are needed.
+ # This seems to be a good enough limit that avoids repeated resizing.
+ if False and isinstance(value, ExprNodes.ListNode):
+ # One would expect that tuples are more efficient here, but benchmarking with
+ # Py3.5 and Py3.7 suggests that they are not. Probably worth revisiting at some point.
+ # Might be related to the usage of PySequence_FAST() in CPython's list.extend(),
+ # which is probably tuned more towards lists than tuples (and rightly so).
+ tuple_node = args[1].as_tuple().analyse_types(self.current_env(), skip_children=True)
+ Visitor.recursively_replace_node(node, args[1], tuple_node)
+ return node
+ wrapped_obj = self._wrap_self_arg(obj, function, is_unbound_method, 'extend')
+ if not items:
+ # Empty sequences are not likely to occur, but why waste a call to list.extend() for them?
+ wrapped_obj.result_is_used = node.result_is_used
+ return wrapped_obj
+ cloned_obj = obj = wrapped_obj
+ if len(items) > 1 and not obj.is_simple():
+ cloned_obj = UtilNodes.LetRefNode(obj)
+ # Use ListComp_Append() for all but the last item and finish with PyList_Append()
+ # to shrink the list storage size at the very end if necessary.
+ temps = []
+ arg = items[-1]
+ if not arg.is_simple():
+ arg = UtilNodes.LetRefNode(arg)
+ temps.append(arg)
+ new_node = ExprNodes.PythonCapiCallNode(
+ node.pos, "__Pyx_PyList_Append", self.PyObject_Append_func_type,
+ args=[cloned_obj, arg],
+ is_temp=True,
+ utility_code=load_c_utility("ListAppend"))
+ for arg in items[-2::-1]:
+ if not arg.is_simple():
+ arg = UtilNodes.LetRefNode(arg)
+ temps.append(arg)
+ new_node = ExprNodes.binop_node(
+ node.pos, '|',
+ ExprNodes.PythonCapiCallNode(
+ node.pos, "__Pyx_ListComp_Append", self.PyObject_Append_func_type,
+ args=[cloned_obj, arg], py_name="extend",
+ is_temp=True,
+ utility_code=load_c_utility("ListCompAppend")),
+ new_node,
+ type=PyrexTypes.c_returncode_type,
+ )
+ new_node.result_is_used = node.result_is_used
+ if cloned_obj is not obj:
+ temps.append(cloned_obj)
+ for temp in temps:
+ new_node = UtilNodes.EvalWithTempExprNode(temp, new_node)
+ new_node.result_is_used = node.result_is_used
+ return new_node
+
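The rewrite replaces a literal-argument `extend()` call with a chain of append operations; semantically it must match `list.extend`, which this standalone sketch (a hypothetical helper, not from the patch) makes explicit:

```python
def extend_as_appends(lst, items):
    """Equivalent of lst.extend(items) via sequential appends, which is
    what the generated __Pyx_ListComp_Append/__Pyx_PyList_Append chain
    does for literal sequences of at most 8 items."""
    for item in items:
        lst.append(item)
    return lst
```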
PyByteArray_Append_func_type = PyrexTypes.CFuncType(
PyrexTypes.c_returncode_type, [
PyrexTypes.CFuncTypeArg("bytearray", PyrexTypes.py_object_type, None),
@@ -2759,7 +3006,7 @@
if is_list:
type_name = 'List'
obj = obj.as_none_safe_node(
- "'NoneType' object has no attribute '%s'",
+ "'NoneType' object has no attribute '%.30s'",
error="PyExc_AttributeError",
format_args=['pop'])
else:
@@ -2889,21 +3136,41 @@
may_return_none=True,
utility_code=load_c_utility('dict_setdefault'))
- Pyx_PyInt_BinopInt_func_type = PyrexTypes.CFuncType(
+ PyDict_Pop_func_type = PyrexTypes.CFuncType(
PyrexTypes.py_object_type, [
- PyrexTypes.CFuncTypeArg("op1", PyrexTypes.py_object_type, None),
- PyrexTypes.CFuncTypeArg("op2", PyrexTypes.py_object_type, None),
- PyrexTypes.CFuncTypeArg("intval", PyrexTypes.c_long_type, None),
- PyrexTypes.CFuncTypeArg("inplace", PyrexTypes.c_bint_type, None),
- ])
+ PyrexTypes.CFuncTypeArg("dict", PyrexTypes.py_object_type, None),
+ PyrexTypes.CFuncTypeArg("key", PyrexTypes.py_object_type, None),
+ PyrexTypes.CFuncTypeArg("default", PyrexTypes.py_object_type, None),
+ ])
- Pyx_PyFloat_BinopInt_func_type = PyrexTypes.CFuncType(
- PyrexTypes.py_object_type, [
- PyrexTypes.CFuncTypeArg("op1", PyrexTypes.py_object_type, None),
- PyrexTypes.CFuncTypeArg("op2", PyrexTypes.py_object_type, None),
- PyrexTypes.CFuncTypeArg("fval", PyrexTypes.c_double_type, None),
- PyrexTypes.CFuncTypeArg("inplace", PyrexTypes.c_bint_type, None),
- ])
+ def _handle_simple_method_dict_pop(self, node, function, args, is_unbound_method):
+ """Replace dict.pop() by a call to _PyDict_Pop().
+ """
+ if len(args) == 2:
+ args.append(ExprNodes.NullNode(node.pos))
+ elif len(args) != 3:
+ self._error_wrong_arg_count('dict.pop', node, args, "2 or 3")
+ return node
+
+ return self._substitute_method_call(
+ node, function,
+ "__Pyx_PyDict_Pop", self.PyDict_Pop_func_type,
+ 'pop', is_unbound_method, args,
+ may_return_none=True,
+ utility_code=load_c_utility('py_dict_pop'))
+
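The `NullNode` appended for the two-argument form maps to C `NULL`, i.e. "no default"; `__Pyx_PyDict_Pop` must then preserve `dict.pop`'s raise-or-return semantics. A plain-Python model (names are illustrative):

```python
_NO_DEFAULT = object()  # plays the role of the NullNode / C NULL default

def dict_pop_sketch(d, key, default=_NO_DEFAULT):
    """dict.pop semantics that the substituted C call must preserve."""
    if key in d:
        value = d[key]
        del d[key]
        return value
    if default is _NO_DEFAULT:
        raise KeyError(key)
    return default
```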
+ Pyx_BinopInt_func_types = dict(
+ ((ctype, ret_type), PyrexTypes.CFuncType(
+ ret_type, [
+ PyrexTypes.CFuncTypeArg("op1", PyrexTypes.py_object_type, None),
+ PyrexTypes.CFuncTypeArg("op2", PyrexTypes.py_object_type, None),
+ PyrexTypes.CFuncTypeArg("cval", ctype, None),
+ PyrexTypes.CFuncTypeArg("inplace", PyrexTypes.c_bint_type, None),
+ PyrexTypes.CFuncTypeArg("zerodiv_check", PyrexTypes.c_bint_type, None),
+ ], exception_value=None if ret_type.is_pyobject else ret_type.exception_value))
+ for ctype in (PyrexTypes.c_long_type, PyrexTypes.c_double_type)
+ for ret_type in (PyrexTypes.py_object_type, PyrexTypes.c_bint_type)
+ )
def _handle_simple_method_object___add__(self, node, function, args, is_unbound_method):
return self._optimise_num_binop('Add', node, function, args, is_unbound_method)
@@ -2914,7 +3181,7 @@
def _handle_simple_method_object___eq__(self, node, function, args, is_unbound_method):
return self._optimise_num_binop('Eq', node, function, args, is_unbound_method)
- def _handle_simple_method_object___neq__(self, node, function, args, is_unbound_method):
+ def _handle_simple_method_object___ne__(self, node, function, args, is_unbound_method):
return self._optimise_num_binop('Ne', node, function, args, is_unbound_method)
def _handle_simple_method_object___and__(self, node, function, args, is_unbound_method):
@@ -2983,7 +3250,7 @@
def _handle_simple_method_float___eq__(self, node, function, args, is_unbound_method):
return self._optimise_num_binop('Eq', node, function, args, is_unbound_method)
- def _handle_simple_method_float___neq__(self, node, function, args, is_unbound_method):
+ def _handle_simple_method_float___ne__(self, node, function, args, is_unbound_method):
return self._optimise_num_binop('Ne', node, function, args, is_unbound_method)
def _optimise_num_binop(self, operator, node, function, args, is_unbound_method):
@@ -2992,7 +3259,12 @@
"""
if len(args) != 2:
return node
- if not node.type.is_pyobject:
+
+ if node.type.is_pyobject:
+ ret_type = PyrexTypes.py_object_type
+ elif node.type is PyrexTypes.c_bint_type and operator in ('Eq', 'Ne'):
+ ret_type = PyrexTypes.c_bint_type
+ else:
return node
# When adding IntNode/FloatNode to something else, assume other operand is also numeric.
@@ -3015,6 +3287,7 @@
return node
is_float = isinstance(numval, ExprNodes.FloatNode)
+ num_type = PyrexTypes.c_double_type if is_float else PyrexTypes.c_long_type
if is_float:
if operator not in ('Add', 'Subtract', 'Remainder', 'TrueDivide', 'Divide', 'Eq', 'Ne'):
return node
@@ -3022,27 +3295,48 @@
# mixed old-/new-style division is not currently optimised for integers
return node
elif abs(numval.constant_result) > 2**30:
+ # Cut off at an integer border that is still safe for all operations.
return node
+ if operator in ('TrueDivide', 'FloorDivide', 'Divide', 'Remainder'):
+ if args[1].constant_result == 0:
+ # Don't optimise division by 0. :)
+ return node
+
args = list(args)
args.append((ExprNodes.FloatNode if is_float else ExprNodes.IntNode)(
numval.pos, value=numval.value, constant_result=numval.constant_result,
- type=PyrexTypes.c_double_type if is_float else PyrexTypes.c_long_type))
+ type=num_type))
inplace = node.inplace if isinstance(node, ExprNodes.NumBinopNode) else False
args.append(ExprNodes.BoolNode(node.pos, value=inplace, constant_result=inplace))
+ if is_float or operator not in ('Eq', 'Ne'):
+ # "PyFloatBinop" and "PyIntBinop" take an additional "check for zero division" argument.
+ zerodivision_check = arg_order == 'CObj' and (
+ not node.cdivision if isinstance(node, ExprNodes.DivNode) else False)
+ args.append(ExprNodes.BoolNode(node.pos, value=zerodivision_check, constant_result=zerodivision_check))
utility_code = TempitaUtilityCode.load_cached(
- "PyFloatBinop" if is_float else "PyIntBinop", "Optimize.c",
- context=dict(op=operator, order=arg_order))
+ "PyFloatBinop" if is_float else "PyIntCompare" if operator in ('Eq', 'Ne') else "PyIntBinop",
+ "Optimize.c",
+ context=dict(op=operator, order=arg_order, ret_type=ret_type))
- return self._substitute_method_call(
- node, function, "__Pyx_Py%s_%s%s" % ('Float' if is_float else 'Int', operator, arg_order),
- self.Pyx_PyFloat_BinopInt_func_type if is_float else self.Pyx_PyInt_BinopInt_func_type,
+ call_node = self._substitute_method_call(
+ node, function,
+ "__Pyx_Py%s_%s%s%s" % (
+ 'Float' if is_float else 'Int',
+ '' if ret_type.is_pyobject else 'Bool',
+ operator,
+ arg_order),
+ self.Pyx_BinopInt_func_types[(num_type, ret_type)],
'__%s__' % operator[:3].lower(), is_unbound_method, args,
may_return_none=True,
with_none_check=False,
utility_code=utility_code)
+ if node.type.is_pyobject and not ret_type.is_pyobject:
+ call_node = ExprNodes.CoerceToPyTypeNode(call_node, self.current_env(), node.type)
+ return call_node
+
### unicode type methods
PyUnicode_uchar_predicate_func_type = PyrexTypes.CFuncType(
@@ -3449,7 +3743,7 @@
format_args=['decode', string_type.name])
else:
string_node = string_node.as_none_safe_node(
- "'NoneType' object has no attribute '%s'",
+ "'NoneType' object has no attribute '%.30s'",
error="PyExc_AttributeError",
format_args=['decode'])
elif not string_type.is_string and not string_type.is_cpp_string:
@@ -3638,18 +3932,8 @@
may_return_none=ExprNodes.PythonCapiCallNode.may_return_none,
with_none_check=True):
args = list(args)
- if with_none_check and args and not args[0].is_literal:
- self_arg = args[0]
- if is_unbound_method:
- self_arg = self_arg.as_none_safe_node(
- "descriptor '%s' requires a '%s' object but received a 'NoneType'",
- format_args=[attr_name, function.obj.name])
- else:
- self_arg = self_arg.as_none_safe_node(
- "'NoneType' object has no attribute '%s'",
- error = "PyExc_AttributeError",
- format_args = [attr_name])
- args[0] = self_arg
+ if with_none_check and args:
+ args[0] = self._wrap_self_arg(args[0], function, is_unbound_method, attr_name)
if is_temp is None:
is_temp = node.is_temp
return ExprNodes.PythonCapiCallNode(
@@ -3661,6 +3945,20 @@
result_is_used = node.result_is_used,
)
+ def _wrap_self_arg(self, self_arg, function, is_unbound_method, attr_name):
+ if self_arg.is_literal:
+ return self_arg
+ if is_unbound_method:
+ self_arg = self_arg.as_none_safe_node(
+ "descriptor '%s' requires a '%s' object but received a 'NoneType'",
+ format_args=[attr_name, self_arg.type.name])
+ else:
+ self_arg = self_arg.as_none_safe_node(
+ "'NoneType' object has no attribute '%{0}s'".format('.30' if len(attr_name) <= 30 else ''),
+ error="PyExc_AttributeError",
+ format_args=[attr_name])
+ return self_arg
+
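The change from `'%s'` to `'%.30s'` in these error-message formats uses printf-style precision to cap the interpolated attribute name at 30 characters. The same precision syntax works in Python's %-formatting, which makes the behaviour easy to check:

```python
# '%.Ns' truncates a string argument to at most N characters
long_name = 'x' * 50
assert '%.30s' % long_name == 'x' * 30
assert '%.30s' % 'short' == 'short'   # shorter strings pass through unchanged
```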
def _inject_int_default_argument(self, node, args, arg_index, type, default_value):
assert len(args) >= arg_index
if len(args) == arg_index:
@@ -3929,8 +4227,42 @@
if isinstance(node.operand1, ExprNodes.IntNode) and \
node.operand2.is_sequence_constructor:
return self._calculate_constant_seq(node, node.operand2, node.operand1)
+ if node.operand1.is_string_literal:
+ return self._multiply_string(node, node.operand1, node.operand2)
+ elif node.operand2.is_string_literal:
+ return self._multiply_string(node, node.operand2, node.operand1)
return self.visit_BinopNode(node)
+ def _multiply_string(self, node, string_node, multiplier_node):
+ multiplier = multiplier_node.constant_result
+ if not isinstance(multiplier, _py_int_types):
+ return node
+ if not (node.has_constant_result() and isinstance(node.constant_result, _py_string_types)):
+ return node
+ if len(node.constant_result) > 256:
+ # Too long for static creation, leave it to runtime. (-> arbitrary limit)
+ return node
+
+ build_string = encoded_string
+ if isinstance(string_node, ExprNodes.BytesNode):
+ build_string = bytes_literal
+ elif isinstance(string_node, ExprNodes.StringNode):
+ if string_node.unicode_value is not None:
+ string_node.unicode_value = encoded_string(
+ string_node.unicode_value * multiplier,
+ string_node.unicode_value.encoding)
+ elif isinstance(string_node, ExprNodes.UnicodeNode):
+ if string_node.bytes_value is not None:
+ string_node.bytes_value = bytes_literal(
+ string_node.bytes_value * multiplier,
+ string_node.bytes_value.encoding)
+ else:
+ assert False, "unknown string node type: %s" % type(string_node)
+ string_node.constant_result = string_node.value = build_string(
+ string_node.value * multiplier,
+ string_node.value.encoding)
+ return string_node
+
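The folding logic above (int multiplier only, 256-character cutoff) can be modelled in a few lines. This standalone sketch ignores the bytes/unicode node bookkeeping and signals "leave it to runtime" by returning None:

```python
def fold_string_multiply(value, multiplier, limit=256):
    """Constant-fold value * multiplier at 'compile time', mirroring the
    guards in _multiply_string: non-int multipliers and overly long
    results are not folded."""
    if not isinstance(multiplier, int):
        return None            # non-constant or non-integer: runtime job
    result = value * multiplier
    if len(result) > limit:
        return None            # too long for static creation
    return result
```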
def _calculate_constant_seq(self, node, sequence_node, factor):
if factor.constant_result != 1 and sequence_node.args:
if isinstance(factor.constant_result, _py_int_types) and factor.constant_result <= 0:
@@ -3950,12 +4282,99 @@
sequence_node.mult_factor = factor
return sequence_node
+ def visit_ModNode(self, node):
+ self.visitchildren(node)
+ if isinstance(node.operand1, ExprNodes.UnicodeNode) and isinstance(node.operand2, ExprNodes.TupleNode):
+ if not node.operand2.mult_factor:
+ fstring = self._build_fstring(node.operand1.pos, node.operand1.value, node.operand2.args)
+ if fstring is not None:
+ return fstring
+ return self.visit_BinopNode(node)
+
+ _parse_string_format_regex = (
+ u'(%(?:' # %...
+ u'(?:[0-9]+|[ ])?' # width (optional) or space prefix fill character (optional)
+ u'(?:[.][0-9]+)?' # precision (optional)
+ u')?.)' # format type (or something different for unsupported formats)
+ )
+
+ def _build_fstring(self, pos, ustring, format_args):
+ # Issues formatting warnings instead of errors since we really only catch a few errors by accident.
+ args = iter(format_args)
+ substrings = []
+ can_be_optimised = True
+ for s in re.split(self._parse_string_format_regex, ustring):
+ if not s:
+ continue
+ if s == u'%%':
+ substrings.append(ExprNodes.UnicodeNode(pos, value=EncodedString(u'%'), constant_result=u'%'))
+ continue
+ if s[0] != u'%':
+ if s[-1] == u'%':
+ warning(pos, "Incomplete format: '...%s'" % s[-3:], level=1)
+ can_be_optimised = False
+ substrings.append(ExprNodes.UnicodeNode(pos, value=EncodedString(s), constant_result=s))
+ continue
+ format_type = s[-1]
+ try:
+ arg = next(args)
+ except StopIteration:
+ warning(pos, "Too few arguments for format placeholders", level=1)
+ can_be_optimised = False
+ break
+ if arg.is_starred:
+ can_be_optimised = False
+ break
+ if format_type in u'asrfdoxX':
+ format_spec = s[1:]
+ if format_type in u'doxX' and u'.' in format_spec:
+ # Precision is not allowed for integers in format(), but ok in %-formatting.
+ can_be_optimised = False
+ elif format_type in u'ars':
+ format_spec = format_spec[:-1]
+ substrings.append(ExprNodes.FormattedValueNode(
+ arg.pos, value=arg,
+ conversion_char=format_type if format_type in u'ars' else None,
+ format_spec=ExprNodes.UnicodeNode(
+ pos, value=EncodedString(format_spec), constant_result=format_spec)
+ if format_spec else None,
+ ))
+ else:
+ # keep it simple for now ...
+ can_be_optimised = False
+ break
+
+ if not can_be_optimised:
+ # Print all warnings we can find before finally giving up here.
+ return None
+
+ try:
+ next(args)
+ except StopIteration: pass
+ else:
+ warning(pos, "Too many arguments for format placeholders", level=1)
+ return None
+
+ node = ExprNodes.JoinedStrNode(pos, values=substrings)
+ return self.visit_JoinedStrNode(node)
+
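The splitting step in `_build_fstring` relies on `re.split()` returning the captured placeholders interleaved with the literal parts. A quick check with the same pattern (written here without the inline comments):

```python
import re

# same pattern as _parse_string_format_regex above
pattern = u'(%(?:(?:[0-9]+|[ ])?(?:[.][0-9]+)?)?.)'

parts = [s for s in re.split(pattern, u'x=%d, y=%5.2f!') if s]
assert parts == [u'x=', u'%d', u', y=', u'%5.2f', u'!']

# '%%' is matched as a single token and handled specially in the loop above
assert u'%%' in re.split(pattern, u'100%% sure')
```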
def visit_FormattedValueNode(self, node):
self.visitchildren(node)
+ conversion_char = node.conversion_char or 's'
if isinstance(node.format_spec, ExprNodes.UnicodeNode) and not node.format_spec.value:
node.format_spec = None
- if node.format_spec is None and node.conversion_char is None and isinstance(node.value, ExprNodes.UnicodeNode):
- return node.value
+ if node.format_spec is None and isinstance(node.value, ExprNodes.IntNode):
+ value = EncodedString(node.value.value)
+ if value.isdigit():
+ return ExprNodes.UnicodeNode(node.value.pos, value=value, constant_result=value)
+ if node.format_spec is None and conversion_char == 's':
+ value = None
+ if isinstance(node.value, ExprNodes.UnicodeNode):
+ value = node.value.value
+ elif isinstance(node.value, ExprNodes.StringNode):
+ value = node.value.unicode_value
+ if value is not None:
+ return ExprNodes.UnicodeNode(node.value.pos, value=value, constant_result=value)
return node
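The two constant folds added here are only sound because of the runtime identities below; the `isdigit()` guard presumably exists because an `IntNode`'s textual value may also be a negative or non-decimal (e.g. hex) literal, which does not equal its formatted digits:

```python
x = 17
assert f"{x}" == u"17"     # an int formats as its decimal digits
s = u"abc"
assert f"{s}" == s         # no format spec: the string itself
assert f"{s!s}" == s       # an explicit 's' conversion is also a no-op
```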
def visit_JoinedStrNode(self, node):
@@ -3973,7 +4392,8 @@
substrings = list(substrings)
unode = substrings[0]
if len(substrings) > 1:
- unode.value = EncodedString(u''.join(value.value for value in substrings))
+ value = EncodedString(u''.join(value.value for value in substrings))
+ unode = ExprNodes.UnicodeNode(unode.pos, value=value, constant_result=value)
# ignore empty Unicode strings
if unode.value:
values.append(unode)
@@ -3981,7 +4401,8 @@
values.extend(substrings)
if not values:
- node = ExprNodes.UnicodeNode(node.pos, value=EncodedString(''))
+ value = EncodedString('')
+ node = ExprNodes.UnicodeNode(node.pos, value=value, constant_result=value)
elif len(values) == 1:
node = values[0]
elif len(values) == 2:
@@ -4271,7 +4692,7 @@
visit_Node = Visitor.VisitorTransform.recurse_to_children
-class FinalOptimizePhase(Visitor.CythonTransform, Visitor.NodeRefCleanupMixin):
+class FinalOptimizePhase(Visitor.EnvTransform, Visitor.NodeRefCleanupMixin):
"""
This visitor handles several commuting optimizations, and is run
just before the C code generation phase.
@@ -4280,8 +4701,11 @@
- eliminate None assignment and refcounting for first assignment.
- isinstance -> typecheck for cdef types
- eliminate checks for None and/or types that became redundant after tree changes
+ - eliminate useless string formatting steps
- replace Python function calls that look like method calls by a faster PyMethodCallNode
"""
+ in_loop = False
+
def visit_SingleAssignmentNode(self, node):
"""Avoid redundant initialisation of local variables before their
first assignment.
@@ -4308,11 +4732,13 @@
function.type = function.entry.type
PyTypeObjectPtr = PyrexTypes.CPtrType(cython_scope.lookup('PyTypeObject').type)
node.args[1] = ExprNodes.CastNode(node.args[1], PyTypeObjectPtr)
- elif (self.current_directives.get("optimize.unpack_method_calls")
- and node.is_temp and function.type.is_pyobject):
+ elif (node.is_temp and function.type.is_pyobject and self.current_directives.get(
+ "optimize.unpack_method_calls_in_pyinit"
+ if not self.in_loop and self.current_env().is_module_scope
+ else "optimize.unpack_method_calls")):
# optimise simple Python methods calls
if isinstance(node.arg_tuple, ExprNodes.TupleNode) and not (
- node.arg_tuple.mult_factor or (node.arg_tuple.is_literal and node.arg_tuple.args)):
+ node.arg_tuple.mult_factor or (node.arg_tuple.is_literal and len(node.arg_tuple.args) > 1)):
# simple call, now exclude calls to objects that are definitely not methods
may_be_a_method = True
if function.type is Builtin.type_type:
@@ -4340,6 +4766,11 @@
node, function=function, arg_tuple=node.arg_tuple, type=node.type))
return node
+ def visit_NumPyMethodCallNode(self, node):
+ # Exclude from replacement above.
+ self.visitchildren(node)
+ return node
+
def visit_PyTypeTestNode(self, node):
"""Remove tests for alternatively allowed None values from
type tests when we know that the argument cannot be None
@@ -4360,6 +4791,16 @@
return node.arg
return node
+ def visit_LoopNode(self, node):
+ """Remember when we enter a loop as some expensive optimisations might still be worth it there.
+ """
+ old_val = self.in_loop
+ self.in_loop = True
+ self.visitchildren(node)
+ self.in_loop = old_val
+ return node
+
+
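`visit_LoopNode` uses a save/restore idiom so that `in_loop` is only True while visiting the loop's own subtree and sibling subtrees are unaffected. The same pattern, transplanted onto CPython's `ast` module (the class and its counter are invented for illustration):

```python
import ast

class LoopAwareVisitor(ast.NodeVisitor):
    """Counts calls that occur lexically inside a loop, using the same
    save/restore of an in_loop flag as visit_LoopNode above."""
    def __init__(self):
        self.in_loop = False
        self.calls_in_loops = 0

    def _visit_loop(self, node):
        old_val = self.in_loop
        self.in_loop = True
        self.generic_visit(node)   # children see in_loop == True
        self.in_loop = old_val     # siblings see the previous state

    visit_For = visit_While = _visit_loop

    def visit_Call(self, node):
        if self.in_loop:
            self.calls_in_loops += 1
        self.generic_visit(node)
```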
class ConsolidateOverflowCheck(Visitor.CythonTransform):
"""
This class facilitates the sharing of overflow checking among all nodes
diff -Nru cython-0.26.1/Cython/Compiler/Options.py cython-0.29.14/Cython/Compiler/Options.py
--- cython-0.26.1/Cython/Compiler/Options.py 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Options.py 2019-07-26 12:09:39.000000000 +0000
@@ -4,6 +4,7 @@
from __future__ import absolute_import
+
class ShouldBeFromDirective(object):
known_directives = []
@@ -28,113 +29,144 @@
"Illegal access of '%s' from Options module rather than directive '%s'"
% (self.options_name, self.directive_name))
-# Include docstrings.
+
+"""
+The members of this module are documented using autodata in
+Cython/docs/src/reference/compilation.rst.
+See http://www.sphinx-doc.org/en/master/ext/autodoc.html#directive-autoattribute
+for how autodata works.
+Descriptions of those members should start with a #:
+Don't forget to keep the docs in sync by removing and adding
+the members in both this file and the .rst file.
+"""
+
+#: Whether or not to include docstring in the Python extension. If False, the binary size
+#: will be smaller, but the ``__doc__`` attribute of any class or function will be an
+#: empty string.
docstrings = True
-# Embed the source code position in the docstrings of functions and classes.
+#: Embed the source code position in the docstrings of functions and classes.
embed_pos_in_docstring = False
-# Copy the original source code line by line into C code comments
-# in the generated code file to help with understanding the output.
+#: Copy the original source code line by line into C code comments
+#: in the generated code file to help with understanding the output.
+#: This is also required for coverage analysis.
emit_code_comments = True
-pre_import = None # undocumented
+# undocumented
+pre_import = None
-# Decref global variables in this module on exit for garbage collection.
-# 0: None, 1+: interned objects, 2+: cdef globals, 3+: types objects
-# Mostly for reducing noise in Valgrind, only executes at process exit
-# (when all memory will be reclaimed anyways).
+#: Decref global variables in each module on exit for garbage collection.
+#: 0: None, 1+: interned objects, 2+: cdef globals, 3+: types objects
+#: Mostly for reducing noise in Valgrind as it typically executes at process exit
+#: (when all memory will be reclaimed anyways).
+#: Note that directly or indirectly executed cleanup code that makes use of global
+#: variables or types may no longer be safe when enabling the respective level since
+#: there is no guaranteed order in which the (reference counted) objects will
+#: be cleaned up. The order can change due to live references and reference cycles.
generate_cleanup_code = False
-# Should tp_clear() set object fields to None instead of clearing them to NULL?
+#: Should tp_clear() set object fields to None instead of clearing them to NULL?
clear_to_none = True
-# Generate an annotated HTML version of the input source files.
+#: Generate an annotated HTML version of the input source files for debugging and optimisation purposes.
+#: This has the same effect as the ``annotate`` argument in :func:`cythonize`.
annotate = False
# When annotating source files in HTML, include coverage information from
# this file.
annotate_coverage_xml = None
-# This will abort the compilation on the first error occurred rather than trying
-# to keep going and printing further error messages.
+#: This will abort the compilation on the first error occurred rather than trying
+#: to keep going and printing further error messages.
fast_fail = False
-# Make all warnings into errors.
+#: Turn all warnings into errors.
warning_errors = False
-# Make unknown names an error. Python raises a NameError when
-# encountering unknown names at runtime, whereas this option makes
-# them a compile time error. If you want full Python compatibility,
-# you should disable this option and also 'cache_builtins'.
+#: Make unknown names an error. Python raises a NameError when
+#: encountering unknown names at runtime, whereas this option makes
+#: them a compile time error. If you want full Python compatibility,
+#: you should disable this option and also 'cache_builtins'.
error_on_unknown_names = True
-# Make uninitialized local variable reference a compile time error.
-# Python raises UnboundLocalError at runtime, whereas this option makes
-# them a compile time error. Note that this option affects only variables
-# of "python object" type.
+#: Make uninitialized local variable reference a compile time error.
+#: Python raises UnboundLocalError at runtime, whereas this option makes
+#: them a compile time error. Note that this option affects only variables
+#: of "python object" type.
error_on_uninitialized = True
-# This will convert statements of the form "for i in range(...)"
-# to "for i from ..." when i is a cdef'd integer type, and the direction
-# (i.e. sign of step) can be determined.
-# WARNING: This may change the semantics if the range causes assignment to
-# i to overflow. Specifically, if this option is set, an error will be
-# raised before the loop is entered, whereas without this option the loop
-# will execute until an overflowing value is encountered.
+#: This will convert statements of the form ``for i in range(...)``
+#: to ``for i from ...`` when ``i`` is a C integer type, and the direction
+#: (i.e. sign of step) can be determined.
+#: WARNING: This may change the semantics if the range causes assignment to
+#: i to overflow. Specifically, if this option is set, an error will be
+#: raised before the loop is entered, whereas without this option the loop
+#: will execute until an overflowing value is encountered.
convert_range = True
-# Perform lookups on builtin names only once, at module initialisation
-# time. This will prevent the module from getting imported if a
-# builtin name that it uses cannot be found during initialisation.
+#: Perform lookups on builtin names only once, at module initialisation
+#: time. This will prevent the module from getting imported if a
+#: builtin name that it uses cannot be found during initialisation.
+#: Default is True.
+#: Note that some legacy builtins are automatically remapped
+#: from their Python 2 names to their Python 3 names by Cython
+#: when building in Python 3.x,
+#: so that they do not get in the way even if this option is enabled.
cache_builtins = True
-# Generate branch prediction hints to speed up error handling etc.
+#: Generate branch prediction hints to speed up error handling etc.
gcc_branch_hints = True
-# Enable this to allow one to write your_module.foo = ... to overwrite the
-# definition if the cpdef function foo, at the cost of an extra dictionary
-# lookup on every call.
-# If this is false it generates only the Python wrapper and no override check.
+#: Enable this to allow one to write ``your_module.foo = ...`` to overwrite the
+#: definition of the cpdef function foo, at the cost of an extra dictionary
+#: lookup on every call.
+#: If this is false it generates only the Python wrapper and no override check.
lookup_module_cpdef = False
-# Whether or not to embed the Python interpreter, for use in making a
-# standalone executable or calling from external libraries.
-# This will provide a method which initialises the interpreter and
-# executes the body of this module.
+#: Whether or not to embed the Python interpreter, for use in making a
+#: standalone executable or calling from external libraries.
+#: This will provide a C function which initialises the interpreter and
+#: executes the body of this module.
+#: See `this demo `_
+#: for a concrete example.
+#: If true, the initialisation function is the C main() function, but
+#: this option can also be set to a non-empty string to provide a function name explicitly.
+#: Default is False.
embed = None
# In previous iterations of Cython, globals() gave the first non-Cython module
# globals in the call stack. Sage relies on this behavior for variable injection.
old_style_globals = ShouldBeFromDirective('old_style_globals')
-# Allows cimporting from a pyx file without a pxd file.
+#: Allows cimporting from a pyx file without a pxd file.
cimport_from_pyx = False
-# max # of dims for buffers -- set lower than number of dimensions in numpy, as
-# slices are passed by value and involve a lot of copying
+#: Maximum number of dimensions for buffers -- set lower than the number of
+#: dimensions in numpy, as slices are passed by value and involve a lot of copying.
buffer_max_dims = 8
-# Number of function closure instances to keep in a freelist (0: no freelists)
+#: Number of function closure instances to keep in a freelist (0: no freelists)
closure_freelist_size = 8
def get_directive_defaults():
- # To add an item to this list, all accesses should be changed to use the new
- # directive, and the global option itself should be set to an instance of
- # ShouldBeFromDirective.
- for old_option in ShouldBeFromDirective.known_directives:
- value = globals().get(old_option.options_name)
- assert old_option.directive_name in _directive_defaults
- if not isinstance(value, ShouldBeFromDirective):
- if old_option.disallow:
- raise RuntimeError(
- "Option '%s' must be set from directive '%s'" % (
- old_option.option_name, old_option.directive_name))
- else:
- # Warn?
- _directive_defaults[old_option.directive_name] = value
- return _directive_defaults
+ # To add an item to this list, all accesses should be changed to use the new
+ # directive, and the global option itself should be set to an instance of
+ # ShouldBeFromDirective.
+ for old_option in ShouldBeFromDirective.known_directives:
+ value = globals().get(old_option.options_name)
+ assert old_option.directive_name in _directive_defaults
+ if not isinstance(value, ShouldBeFromDirective):
+ if old_option.disallow:
+ raise RuntimeError(
+ "Option '%s' must be set from directive '%s'" % (
+ old_option.option_name, old_option.directive_name))
+ else:
+ # Warn?
+ _directive_defaults[old_option.directive_name] = value
+ return _directive_defaults
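The ``ShouldBeFromDirective`` migration above can be sketched standalone: a sentinel object replaces the legacy module-level option, and the defaults collector either copies a real (legacy-assigned) value into the directive defaults or rejects it. The names and structure here are illustrative, not Cython's actual classes:

```python
# Sketch of the legacy-option migration pattern in get_directive_defaults()
# above (illustrative names, not Cython's actual implementation).

class ShouldBeFromDirective(object):
    """Sentinel left in place of a legacy module-level option."""
    known_directives = []

    def __init__(self, directive_name, disallow=False):
        self.directive_name = directive_name
        self.disallow = disallow
        self.known_directives.append(self)

defaults = {'old_style_globals': False}
module_globals = {'old_style_globals': ShouldBeFromDirective('old_style_globals')}

def get_defaults():
    for opt in ShouldBeFromDirective.known_directives:
        value = module_globals.get(opt.directive_name)
        if not isinstance(value, ShouldBeFromDirective):
            if opt.disallow:
                raise RuntimeError(
                    "Option %r must be set from directive" % opt.directive_name)
            # legacy assignment still honoured: copy it into the defaults
            defaults[opt.directive_name] = value
    return defaults

# untouched sentinel: the directive default keeps its declared value
assert get_defaults()['old_style_globals'] is False
# a legacy-style global assignment is migrated into the defaults
module_globals['old_style_globals'] = True
assert get_defaults()['old_style_globals'] is True
```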
# Declare compiler directives
_directive_defaults = {
@@ -142,37 +174,35 @@
'nonecheck' : False,
'initializedcheck' : True,
'embedsignature' : False,
- 'locals' : {},
'auto_cpdef': False,
'auto_pickle': None,
- 'cdivision': False, # was True before 0.12
+ 'cdivision': False, # was True before 0.12
'cdivision_warnings': False,
'overflowcheck': False,
'overflowcheck.fold': True,
'always_allow_keywords': False,
'allow_none_for_extension_args': True,
'wraparound' : True,
- 'ccomplex' : False, # use C99/C++ for complex types and arith
+ 'ccomplex' : False, # use C99/C++ for complex types and arith
'callspec' : "",
- 'final' : False,
- 'internal' : False,
+ 'nogil' : False,
'profile': False,
- 'no_gc_clear': False,
- 'no_gc': False,
'linetrace': False,
'emit_code_comments': True, # copy original source code into C code comments
- 'annotation_typing': False, # read type declarations from Python function annotations
+ 'annotation_typing': True, # read type declarations from Python function annotations
'infer_types': None,
'infer_types.verbose': False,
'autotestdict': True,
'autotestdict.cdef': False,
'autotestdict.all': False,
- 'language_level': 2,
- 'fast_getattr': False, # Undocumented until we come up with a better way to handle this everywhere.
- 'py2_import': False, # For backward compatibility of Cython's source code in Py3 source mode
+ 'language_level': None,
+ 'fast_getattr': False, # Undocumented until we come up with a better way to handle this everywhere.
+ 'py2_import': False, # For backward compatibility of Cython's source code in Py3 source mode
+ 'preliminary_late_includes_cy28': False, # Temporary directive in 0.28, to be removed in a later version (see GH#2079).
+ 'iterable_coroutine': False, # Make async coroutines backwards compatible with the old asyncio yield-from syntax.
'c_string_type': 'bytes',
'c_string_encoding': '',
- 'type_version_tag': True, # enables Py_TPFLAGS_HAVE_VERSION_TAG on extension types
+ 'type_version_tag': True, # enables Py_TPFLAGS_HAVE_VERSION_TAG on extension types
'unraisable_tracebacks': True,
'old_style_globals': False,
'np_pythran': False,
@@ -192,15 +222,16 @@
# optimizations
'optimize.inline_defnode_calls': True,
- 'optimize.unpack_method_calls': True, # increases code size when True
+ 'optimize.unpack_method_calls': True, # increases code size when True
+ 'optimize.unpack_method_calls_in_pyinit': False, # uselessly increases code size when True
'optimize.use_switch': True,
# remove unreachable code
'remove_unreachable': True,
# control flow debug directives
- 'control_flow.dot_output': "", # Graphviz output filename
- 'control_flow.dot_annotate_defs': False, # Annotate definitions
+ 'control_flow.dot_output': "", # Graphviz output filename
+ 'control_flow.dot_annotate_defs': False, # Annotate definitions
# test support
'test_assert_path_exists' : [],
@@ -208,7 +239,6 @@
# experimental, subject to change
'binding': None,
- 'freelist': 0,
'formal_grammar': False,
}
@@ -266,17 +296,23 @@
# Override types possibilities above, if needed
directive_types = {
+ 'language_level': str, # values can be None/2/3/'3str', where None == 2+warning
'auto_pickle': bool,
+ 'locals': dict,
'final' : bool, # final cdef classes and methods
+ 'nogil' : bool,
'internal' : bool, # cdef class visibility in the module dict
- 'infer_types' : bool, # values can be True/None/False
+ 'infer_types' : bool, # values can be True/None/False
'binding' : bool,
- 'cfunc' : None, # decorators do not take directive value
+ 'cfunc' : None, # decorators do not take directive value
'ccall' : None,
'inline' : None,
'staticmethod' : None,
'cclass' : None,
+ 'no_gc_clear' : bool,
+ 'no_gc' : bool,
'returns' : type,
+ 'exceptval': type, # actually (type, check=True/False), but has its own parser
'set_initial_path': str,
'freelist': int,
'c_string_type': one_of('bytes', 'bytearray', 'str', 'unicode'),
@@ -287,15 +323,22 @@
if key not in directive_types:
directive_types[key] = type(val)
-directive_scopes = { # defaults to available everywhere
+directive_scopes = { # defaults to available everywhere
# 'module', 'function', 'class', 'with statement'
'auto_pickle': ('module', 'cclass'),
'final' : ('cclass', 'function'),
+ 'nogil' : ('function', 'with statement'),
'inline' : ('function',),
+ 'cfunc' : ('function', 'with statement'),
+ 'ccall' : ('function', 'with statement'),
+ 'returns' : ('function',),
+ 'exceptval' : ('function',),
+ 'locals' : ('function',),
'staticmethod' : ('function',), # FIXME: analysis currently lacks more specific function scope
'no_gc_clear' : ('cclass',),
'no_gc' : ('cclass',),
'internal' : ('cclass',),
+ 'cclass' : ('class', 'cclass', 'with statement'),
'autotestdict' : ('module',),
'autotestdict.all' : ('module',),
'autotestdict.cdef' : ('module',),
@@ -315,6 +358,7 @@
'old_style_globals': ('module',),
'np_pythran': ('module',),
'fast_gil': ('module',),
+ 'iterable_coroutine': ('module', 'function'),
}
@@ -415,7 +459,7 @@
item = item.strip()
if not item:
continue
- if not '=' in item:
+ if '=' not in item:
raise ValueError('Expected "=" in option "%s"' % item)
name, value = [s.strip() for s in item.strip().split('=', 1)]
if name not in _directive_defaults:
@@ -433,3 +477,73 @@
parsed_value = parse_directive_value(name, value, relaxed_bool=relaxed_bool)
result[name] = parsed_value
return result
+
+
+def parse_variable_value(value):
+ """
+ Parses a value string and returns the interpreted value.
+
+ >>> parse_variable_value('True')
+ True
+ >>> parse_variable_value('true')
+ 'true'
+ >>> parse_variable_value('us-ascii')
+ 'us-ascii'
+ >>> parse_variable_value('str')
+ 'str'
+ >>> parse_variable_value('123')
+ 123
+ >>> parse_variable_value('1.23')
+ 1.23
+
+ """
+ if value == "True":
+ return True
+ elif value == "False":
+ return False
+ elif value == "None":
+ return None
+ elif value.isdigit():
+ return int(value)
+ else:
+ try:
+ value = float(value)
+ except Exception:
+ # Not a float
+ pass
+ return value
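The coercion order above can be exercised with a standalone re-implementation: only the exact strings ``"True"``/``"False"``/``"None"`` become Python constants, digit strings become ints, anything float-parseable becomes a float, and everything else stays a string (this sketch catches ``ValueError`` rather than the broad ``Exception`` of the original):

```python
# Standalone re-implementation of the value coercion shown above.
def parse_variable_value(value):
    if value == "True":
        return True
    if value == "False":
        return False
    if value == "None":
        return None
    if value.isdigit():
        return int(value)
    try:
        return float(value)
    except ValueError:
        return value  # not a number: keep the raw string

assert parse_variable_value('True') is True
assert parse_variable_value('true') == 'true'   # comparison is case-sensitive
assert parse_variable_value('123') == 123
assert parse_variable_value('1.23') == 1.23
assert parse_variable_value('us-ascii') == 'us-ascii'
```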
+
+
+def parse_compile_time_env(s, current_settings=None):
+ """
+ Parses a comma-separated list of pragma options. Whitespace
+ is ignored.
+
+ >>> parse_compile_time_env(' ')
+ {}
+ >>> (parse_compile_time_env('HAVE_OPENMP=True') ==
+ ... {'HAVE_OPENMP': True})
+ True
+ >>> parse_compile_time_env(' asdf')
+ Traceback (most recent call last):
+ ...
+ ValueError: Expected "=" in option "asdf"
+ >>> parse_compile_time_env('NUM_THREADS=4') == {'NUM_THREADS': 4}
+ True
+ >>> parse_compile_time_env('unknown=anything') == {'unknown': 'anything'}
+ True
+ """
+ if current_settings is None:
+ result = {}
+ else:
+ result = current_settings
+ for item in s.split(','):
+ item = item.strip()
+ if not item:
+ continue
+ if '=' not in item:
+ raise ValueError('Expected "=" in option "%s"' % item)
+ name, value = [s.strip() for s in item.split('=', 1)]
+ result[name] = parse_variable_value(value)
+ return result
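The ``NAME=VALUE`` splitting above can be sketched as follows; ``_coerce`` stands in for ``parse_variable_value`` and only handles the cases exercised here:

```python
# Sketch of the comma-separated NAME=VALUE parser shown above
# (_coerce is a stand-in for parse_variable_value).

def _coerce(value):
    if value in ('True', 'False', 'None'):
        return {'True': True, 'False': False, 'None': None}[value]
    if value.isdigit():
        return int(value)
    return value

def parse_compile_time_env(s, current_settings=None):
    result = {} if current_settings is None else current_settings
    for item in s.split(','):
        item = item.strip()
        if not item:
            continue
        if '=' not in item:
            raise ValueError('Expected "=" in option "%s"' % item)
        name, value = [part.strip() for part in item.split('=', 1)]
        result[name] = _coerce(value)
    return result

assert parse_compile_time_env('  ') == {}
assert parse_compile_time_env('HAVE_OPENMP=True, NUM_THREADS=4') == \
    {'HAVE_OPENMP': True, 'NUM_THREADS': 4}
try:
    parse_compile_time_env('asdf')
    raised = False
except ValueError:
    raised = True
assert raised
```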
diff -Nru cython-0.26.1/Cython/Compiler/ParseTreeTransforms.pxd cython-0.29.14/Cython/Compiler/ParseTreeTransforms.pxd
--- cython-0.26.1/Cython/Compiler/ParseTreeTransforms.pxd 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/ParseTreeTransforms.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -7,9 +7,6 @@
CythonTransform, VisitorTransform, TreeVisitor,
ScopeTrackingTransform, EnvTransform)
-cdef class NameNodeCollector(TreeVisitor):
- cdef list name_nodes
-
cdef class SkipDeclarations: # (object):
pass
@@ -49,20 +46,33 @@
cdef set imported_names
cdef object scope
+@cython.final
cdef class YieldNodeCollector(TreeVisitor):
cdef public list yields
cdef public list returns
+ cdef public list finallys
+ cdef public list excepts
cdef public bint has_return_value
+ cdef public bint has_yield
+ cdef public bint has_await
+@cython.final
cdef class MarkClosureVisitor(CythonTransform):
cdef bint needs_closure
+@cython.final
cdef class CreateClosureClasses(CythonTransform):
cdef list path
cdef bint in_lambda
cdef module_scope
cdef generator_class
+ cdef create_class_from_scope(self, node, target_module_scope, inner_node=*)
+ cdef find_entries_used_in_closures(self, node)
+
+#cdef class InjectGilHandling(VisitorTransform, SkipDeclarations):
+# cdef bint nogil
+
cdef class GilCheck(VisitorTransform):
cdef list env_stack
cdef bint nogil
diff -Nru cython-0.26.1/Cython/Compiler/ParseTreeTransforms.py cython-0.29.14/Cython/Compiler/ParseTreeTransforms.py
--- cython-0.26.1/Cython/Compiler/ParseTreeTransforms.py 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/ParseTreeTransforms.py 2019-06-30 06:50:51.000000000 +0000
@@ -15,6 +15,7 @@
from . import Nodes
from . import Options
from . import Builtin
+from . import Errors
from .Visitor import VisitorTransform, TreeVisitor
from .Visitor import CythonTransform, EnvTransform, ScopeTrackingTransform
@@ -24,20 +25,6 @@
from .Errors import error, warning, CompileError, InternalError
from .Code import UtilityCode
-class NameNodeCollector(TreeVisitor):
- """Collect all NameNodes of a (sub-)tree in the ``name_nodes``
- attribute.
- """
- def __init__(self):
- super(NameNodeCollector, self).__init__()
- self.name_nodes = []
-
- def visit_NameNode(self, node):
- self.name_nodes.append(node)
-
- def visit_Node(self, node):
- self._visitchildren(node, None)
-
class SkipDeclarations(object):
"""
@@ -65,6 +52,7 @@
def visit_CStructOrUnionDefNode(self, node):
return node
+
class NormalizeTree(CythonTransform):
"""
This transform fixes up a few things after parsing
@@ -192,7 +180,7 @@
# unpack a lambda expression into the corresponding DefNode
collector = YieldNodeCollector()
collector.visitchildren(node.result_expr)
- if collector.yields or collector.awaits or isinstance(node.result_expr, ExprNodes.YieldExprNode):
+ if collector.has_yield or collector.has_await or isinstance(node.result_expr, ExprNodes.YieldExprNode):
body = Nodes.ExprStatNode(
node.result_expr.pos, expr=node.result_expr)
else:
@@ -208,11 +196,22 @@
def visit_GeneratorExpressionNode(self, node):
# unpack a generator expression into the corresponding DefNode
- node.def_node = Nodes.DefNode(node.pos, name=node.name,
- doc=None,
- args=[], star_arg=None,
- starstar_arg=None,
- body=node.loop)
+ collector = YieldNodeCollector()
+ collector.visitchildren(node.loop)
+ node.def_node = Nodes.DefNode(
+ node.pos, name=node.name, doc=None,
+ args=[], star_arg=None, starstar_arg=None,
+ body=node.loop, is_async_def=collector.has_await)
+ self.visitchildren(node)
+ return node
+
+ def visit_ComprehensionNode(self, node):
+ # enforce local scope also in Py2 for async generators (seriously, that's a Py3.6 feature...)
+ if not node.has_local_scope:
+ collector = YieldNodeCollector()
+ collector.visitchildren(node.loop)
+ if collector.has_await:
+ node.has_local_scope = True
self.visitchildren(node)
return node
@@ -600,9 +599,11 @@
else:
return node
-class TrackNumpyAttributes(CythonTransform, SkipDeclarations):
- def __init__(self, context):
- super(TrackNumpyAttributes, self).__init__(context)
+
+class TrackNumpyAttributes(VisitorTransform, SkipDeclarations):
+ # TODO: Make name handling as good as in InterpretCompilerDirectives() below - probably best to merge the two.
+ def __init__(self):
+ super(TrackNumpyAttributes, self).__init__()
self.numpy_module_names = set()
def visit_CImportStatNode(self, node):
@@ -612,11 +613,15 @@
def visit_AttributeNode(self, node):
self.visitchildren(node)
- if node.obj.is_name and node.obj.name in self.numpy_module_names:
+ obj = node.obj
+ if (obj.is_name and obj.name in self.numpy_module_names) or obj.is_numpy_attribute:
node.is_numpy_attribute = True
return node
-class InterpretCompilerDirectives(CythonTransform, SkipDeclarations):
+ visit_Node = VisitorTransform.recurse_to_children
+
+
+class InterpretCompilerDirectives(CythonTransform):
"""
After parsing, directives can be stored in a number of places:
- #cython-comments at the top of the file (stored in ModuleNode)
@@ -836,7 +841,14 @@
if node.name in self.cython_module_names:
node.is_cython_module = True
else:
- node.cython_attribute = self.directive_names.get(node.name)
+ directive = self.directive_names.get(node.name)
+ if directive is not None:
+ node.cython_attribute = directive
+ return node
+
+ def visit_NewExprNode(self, node):
+ self.visit(node.cppclass)
+ self.visitchildren(node)
return node
def try_to_parse_directives(self, node):
@@ -884,9 +896,26 @@
return None
def try_to_parse_directive(self, optname, args, kwds, pos):
- directivetype = Options.directive_types.get(optname)
if optname == 'np_pythran' and not self.context.cpp:
raise PostParseError(pos, 'The %s directive can only be used in C++ mode.' % optname)
+ elif optname == 'exceptval':
+ # default: exceptval(None, check=True)
+ arg_error = len(args) > 1
+ check = True
+ if kwds and kwds.key_value_pairs:
+ kw = kwds.key_value_pairs[0]
+ if (len(kwds.key_value_pairs) == 1 and
+ kw.key.is_string_literal and kw.key.value == 'check' and
+ isinstance(kw.value, ExprNodes.BoolNode)):
+ check = kw.value.value
+ else:
+ arg_error = True
+ if arg_error:
+ raise PostParseError(
+ pos, 'The exceptval directive takes 0 or 1 positional arguments and the boolean keyword "check"')
+ return ('exceptval', (args[0] if args else None, check))
+
+ directivetype = Options.directive_types.get(optname)
if len(args) == 1 and isinstance(args[0], ExprNodes.NoneNode):
return optname, Options.get_directive_defaults()[optname]
elif directivetype is bool:
@@ -916,7 +945,7 @@
'The %s directive takes no prepositional arguments' % optname)
return optname, dict([(key.value, value) for key, value in kwds.key_value_pairs])
elif directivetype is list:
- if kwds and len(kwds) != 0:
+ if kwds and len(kwds.key_value_pairs) != 0:
raise PostParseError(pos,
'The %s directive takes no keyword arguments' % optname)
return optname, [ str(arg.value) for arg in args ]
@@ -929,30 +958,33 @@
else:
assert False
- def visit_with_directives(self, body, directives):
- olddirectives = self.directives
- newdirectives = copy.copy(olddirectives)
- newdirectives.update(directives)
- self.directives = newdirectives
- assert isinstance(body, Nodes.StatListNode), body
- retbody = self.visit_Node(body)
- directive = Nodes.CompilerDirectivesNode(pos=retbody.pos, body=retbody,
- directives=newdirectives)
- self.directives = olddirectives
- return directive
+ def visit_with_directives(self, node, directives):
+ if not directives:
+ return self.visit_Node(node)
+
+ old_directives = self.directives
+ new_directives = dict(old_directives)
+ new_directives.update(directives)
+
+ if new_directives == old_directives:
+ return self.visit_Node(node)
+
+ self.directives = new_directives
+ retbody = self.visit_Node(node)
+ self.directives = old_directives
+
+ if not isinstance(retbody, Nodes.StatListNode):
+ retbody = Nodes.StatListNode(node.pos, stats=[retbody])
+ return Nodes.CompilerDirectivesNode(
+ pos=retbody.pos, body=retbody, directives=new_directives)
# Handle decorators
def visit_FuncDefNode(self, node):
directives = self._extract_directives(node, 'function')
- if not directives:
- return self.visit_Node(node)
- body = Nodes.StatListNode(node.pos, stats=[node])
- return self.visit_with_directives(body, directives)
+ return self.visit_with_directives(node, directives)
def visit_CVarDefNode(self, node):
directives = self._extract_directives(node, 'function')
- if not directives:
- return node
for name, value in directives.items():
if name == 'locals':
node.directive_locals = value
@@ -961,29 +993,19 @@
node.pos,
"Cdef functions can only take cython.locals(), "
"staticmethod, or final decorators, got %s." % name))
- body = Nodes.StatListNode(node.pos, stats=[node])
- return self.visit_with_directives(body, directives)
+ return self.visit_with_directives(node, directives)
def visit_CClassDefNode(self, node):
directives = self._extract_directives(node, 'cclass')
- if not directives:
- return self.visit_Node(node)
- body = Nodes.StatListNode(node.pos, stats=[node])
- return self.visit_with_directives(body, directives)
+ return self.visit_with_directives(node, directives)
def visit_CppClassNode(self, node):
directives = self._extract_directives(node, 'cppclass')
- if not directives:
- return self.visit_Node(node)
- body = Nodes.StatListNode(node.pos, stats=[node])
- return self.visit_with_directives(body, directives)
+ return self.visit_with_directives(node, directives)
def visit_PyClassDefNode(self, node):
directives = self._extract_directives(node, 'class')
- if not directives:
- return self.visit_Node(node)
- body = Nodes.StatListNode(node.pos, stats=[node])
- return self.visit_with_directives(body, directives)
+ return self.visit_with_directives(node, directives)
def _extract_directives(self, node, scope_name):
if not node.decorators:
@@ -992,7 +1014,8 @@
directives = []
realdecs = []
both = []
- for dec in node.decorators:
+ # Decorators coming first take precedence.
+ for dec in node.decorators[::-1]:
new_directives = self.try_to_parse_directives(dec.decorator)
if new_directives is not None:
for directive in new_directives:
@@ -1002,15 +1025,17 @@
directives.append(directive)
if directive[0] == 'staticmethod':
both.append(dec)
+ # Adapt scope type based on decorators that change it.
+ if directive[0] == 'cclass' and scope_name == 'class':
+ scope_name = 'cclass'
else:
realdecs.append(dec)
- if realdecs and isinstance(node, (Nodes.CFuncDefNode, Nodes.CClassDefNode, Nodes.CVarDefNode)):
+ if realdecs and (scope_name == 'cclass' or
+ isinstance(node, (Nodes.CFuncDefNode, Nodes.CClassDefNode, Nodes.CVarDefNode))):
raise PostParseError(realdecs[0].pos, "Cdef functions/classes cannot take arbitrary decorators.")
- else:
- node.decorators = realdecs + both
+ node.decorators = realdecs[::-1] + both[::-1]
# merge or override repeated directives
optdict = {}
- directives.reverse() # Decorators coming first take precedence
for directive in directives:
name, value = directive
if name in optdict:
@@ -1027,7 +1052,7 @@
optdict[name] = value
return optdict
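The precedence rule established above ("decorators coming first take precedence", implemented by iterating ``node.decorators[::-1]``) can be shown in miniature. This is a simplified sketch: Cython's real merge logic in ``_extract_directives`` handles repeated directives more carefully, while here a repeated scalar directive is simply overwritten and dict-valued directives are merged:

```python
# Miniature of the decorator-precedence merge: iterating the decorator
# list bottom-up and letting each parsed directive overwrite the previous
# one means the top-most (first-written) decorator wins for scalars.

decorators = [                 # as written in source, top to bottom:
    ('boundscheck', False),    # @cython.boundscheck(False)
    ('boundscheck', True),     # @cython.boundscheck(True)  (shadowed)
    ('locals', {'x': 'int'}),  # @cython.locals(x=int)
    ('locals', {'y': 'double'}),
]

optdict = {}
for name, value in decorators[::-1]:  # bottom-up, as in the transform
    if name in optdict and isinstance(value, dict):
        # repeated dict-valued directives are merged rather than replaced
        value = dict(optdict[name], **value)
    optdict[name] = value

assert optdict['boundscheck'] is False  # first-written decorator won
assert optdict['locals'] == {'x': 'int', 'y': 'double'}
```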
- # Handle with statements
+ # Handle with-statements
def visit_WithStatNode(self, node):
directive_dict = {}
for directive in self.try_to_parse_directives(node.manager) or []:
@@ -1257,7 +1282,7 @@
pos, with_stat=node,
test_if_run=False,
args=excinfo_target,
- await=ExprNodes.AwaitExprNode(pos, arg=None) if is_async else None)),
+ await_expr=ExprNodes.AwaitExprNode(pos, arg=None) if is_async else None)),
body=Nodes.ReraiseStatNode(pos),
),
],
@@ -1279,7 +1304,7 @@
test_if_run=True,
args=ExprNodes.TupleNode(
pos, args=[ExprNodes.NoneNode(pos) for _ in range(3)]),
- await=ExprNodes.AwaitExprNode(pos, arg=None) if is_async else None)),
+ await_expr=ExprNodes.AwaitExprNode(pos, arg=None) if is_async else None)),
handle_error_case=False,
)
return node
@@ -1350,10 +1375,15 @@
elif decorator.is_attribute and decorator.obj.name in properties:
handler_name = self._map_property_attribute(decorator.attribute)
if handler_name:
- assert decorator.obj.name == node.name
- if len(node.decorators) > 1:
+ if decorator.obj.name != node.name:
+ # CPython does not generate an error or warning, but not something useful either.
+ error(decorator_node.pos,
+ "Mismatching property names, expected '%s', got '%s'" % (
+ decorator.obj.name, node.name))
+ elif len(node.decorators) > 1:
return self._reject_decorated_property(node, decorator_node)
- return self._add_to_property(properties, node, handler_name, decorator_node)
+ else:
+ return self._add_to_property(properties, node, handler_name, decorator_node)
# we clear node.decorators, so we need to set the
# is_staticmethod/is_classmethod attributes now
@@ -1500,6 +1530,13 @@
def visit_CClassDefNode(self, node):
if node.class_name not in self.module_scope.entries:
node.declare(self.module_scope)
+ # Expand fused methods of .pxd declared types to construct the final vtable order.
+ type = self.module_scope.entries[node.class_name].type
+ if type is not None and type.is_extension_type and not type.is_builtin_type and type.scope:
+ scope = type.scope
+ for entry in scope.cfunc_entries:
+ if entry.type and entry.type.is_fused:
+ entry.type.get_all_specialized_function_types()
return node
@@ -1670,6 +1707,8 @@
# so it can be pickled *after* self is memoized.
unpickle_func = TreeFragment(u"""
def %(unpickle_func_name)s(__pyx_type, long __pyx_checksum, __pyx_state):
+ cdef object __pyx_PickleError
+ cdef object __pyx_result
if __pyx_checksum != %(checksum)s:
from pickle import PickleError as __pyx_PickleError
raise __pyx_PickleError("Incompatible checksums (%%s vs %(checksum)s = (%(members)s))" %% __pyx_checksum)
@@ -1698,6 +1737,8 @@
pickle_func = TreeFragment(u"""
def __reduce_cython__(self):
+ cdef tuple state
+ cdef object _dict
cdef bint use_setstate
state = (%(members)s)
_dict = getattr(self, '__dict__', None)
@@ -1824,7 +1865,7 @@
def visit_FuncDefNode(self, node):
"""
- Analyse a function and its body, as that hasn't happend yet. Also
+ Analyse a function and its body, as that hasn't happened yet. Also
analyse the directive_locals set by @cython.locals().
Then, if we are a function with fused arguments, replace the function
@@ -1887,6 +1928,8 @@
binding = self.current_directives.get('binding')
rhs = ExprNodes.PyCFunctionNode.from_defnode(node, binding)
node.code_object = rhs.code_object
+ if node.is_generator:
+ node.gbody.code_object = node.code_object
if env.is_py_class_scope:
rhs.binding = True
@@ -2013,7 +2056,7 @@
# Some nodes are no longer needed after declaration
# analysis and can be dropped. The analysis was performed
- # on these nodes in a seperate recursive process from the
+ # on these nodes in a separate recursive process from the
# enclosing function or module, so we can simply drop them.
def visit_CDeclaratorNode(self, node):
# necessary to ensure that all CNameDeclaratorNodes are visited.
@@ -2287,6 +2330,7 @@
@cython.cclass
@cython.ccall
@cython.inline
+ @cython.nogil
"""
def visit_ModuleNode(self, node):
@@ -2306,22 +2350,42 @@
modifiers = []
if 'inline' in self.directives:
modifiers.append('inline')
+ nogil = self.directives.get('nogil')
+ except_val = self.directives.get('exceptval')
+ return_type_node = self.directives.get('returns')
+ if return_type_node is None and self.directives['annotation_typing']:
+ return_type_node = node.return_type_annotation
+ # for Python annotations, prefer safe exception handling by default
+ if return_type_node is not None and except_val is None:
+ except_val = (None, True) # except *
+ elif except_val is None:
+ # backward compatible default: no exception check
+ except_val = (None, False)
if 'ccall' in self.directives:
node = node.as_cfunction(
- overridable=True, returns=self.directives.get('returns'), modifiers=modifiers)
+ overridable=True, modifiers=modifiers, nogil=nogil,
+ returns=return_type_node, except_val=except_val)
return self.visit(node)
if 'cfunc' in self.directives:
if self.in_py_class:
error(node.pos, "cfunc directive is not allowed here")
else:
node = node.as_cfunction(
- overridable=False, returns=self.directives.get('returns'), modifiers=modifiers)
+ overridable=False, modifiers=modifiers, nogil=nogil,
+ returns=return_type_node, except_val=except_val)
return self.visit(node)
if 'inline' in modifiers:
error(node.pos, "Python functions cannot be declared 'inline'")
+ if nogil:
+ # TODO: turn this into a "with gil" declaration.
+ error(node.pos, "Python functions cannot be declared 'nogil'")
self.visitchildren(node)
return node
+ def visit_LambdaNode(self, node):
+ # No directives should modify lambdas or generator expressions (and also nothing in them).
+ return node
+
def visit_PyClassDefNode(self, node):
if 'cclass' in self.directives:
node = node.as_cclass()
@@ -2451,25 +2515,36 @@
node.else_clause = None
return node
+ def visit_TryFinallyStatNode(self, node):
+ self.visitchildren(node)
+ if node.finally_clause.is_terminator:
+ node.is_terminator = True
+ return node
+
class YieldNodeCollector(TreeVisitor):
def __init__(self):
super(YieldNodeCollector, self).__init__()
self.yields = []
- self.awaits = []
self.returns = []
+ self.finallys = []
+ self.excepts = []
self.has_return_value = False
+ self.has_yield = False
+ self.has_await = False
def visit_Node(self, node):
self.visitchildren(node)
def visit_YieldExprNode(self, node):
self.yields.append(node)
+ self.has_yield = True
self.visitchildren(node)
def visit_AwaitExprNode(self, node):
- self.awaits.append(node)
+ self.yields.append(node)
+ self.has_await = True
self.visitchildren(node)
def visit_ReturnStatNode(self, node):
@@ -2478,6 +2553,14 @@
self.has_return_value = True
self.returns.append(node)
+ def visit_TryFinallyStatNode(self, node):
+ self.visitchildren(node)
+ self.finallys.append(node)
+
+ def visit_TryExceptStatNode(self, node):
+ self.visitchildren(node)
+ self.excepts.append(node)
+
def visit_ClassDefNode(self, node):
pass
@@ -2513,28 +2596,36 @@
collector.visitchildren(node)
if node.is_async_def:
- if collector.yields:
- error(collector.yields[0].pos, "'yield' not allowed in async coroutines (use 'await')")
- yields = collector.awaits
- elif collector.yields:
- if collector.awaits:
- error(collector.yields[0].pos, "'await' not allowed in generators (use 'yield')")
- yields = collector.yields
+ coroutine_type = Nodes.AsyncDefNode
+ if collector.has_yield:
+ coroutine_type = Nodes.AsyncGenNode
+ for yield_expr in collector.yields + collector.returns:
+ yield_expr.in_async_gen = True
+ elif self.current_directives['iterable_coroutine']:
+ coroutine_type = Nodes.IterableAsyncDefNode
+ elif collector.has_await:
+ found = next(y for y in collector.yields if y.is_await)
+ error(found.pos, "'await' not allowed in generators (use 'yield')")
+ return node
+ elif collector.has_yield:
+ coroutine_type = Nodes.GeneratorDefNode
else:
return node
- for i, yield_expr in enumerate(yields, 1):
+ for i, yield_expr in enumerate(collector.yields, 1):
yield_expr.label_num = i
- for retnode in collector.returns:
+ for retnode in collector.returns + collector.finallys + collector.excepts:
retnode.in_generator = True
gbody = Nodes.GeneratorBodyDefNode(
- pos=node.pos, name=node.name, body=node.body)
- coroutine = (Nodes.AsyncDefNode if node.is_async_def else Nodes.GeneratorDefNode)(
+ pos=node.pos, name=node.name, body=node.body,
+ is_async_gen_body=node.is_async_def and collector.has_yield)
+ coroutine = coroutine_type(
pos=node.pos, name=node.name, args=node.args,
star_arg=node.star_arg, starstar_arg=node.starstar_arg,
doc=node.doc, decorators=node.decorators,
- gbody=gbody, lambda_name=node.lambda_name)
+ gbody=gbody, lambda_name=node.lambda_name,
+ return_type_annotation=node.return_type_annotation)
return coroutine
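The ``coroutine_type`` selection above (``async def`` with a ``yield`` becomes an async generator, plain ``await`` a coroutine, a bare ``yield`` a generator) mirrors CPython's own classification of function bodies, which the stdlib ``inspect`` module exposes:

```python
# CPython's classification of the same four body shapes, via inspect.
import inspect

async def agen():   # async def + yield  -> async generator
    yield 1

async def coro():   # async def, no yield -> coroutine
    return 1

def gen():          # plain def + yield  -> generator
    yield 1

def plain():        # neither            -> ordinary function
    return 1

assert inspect.isasyncgenfunction(agen)
assert inspect.iscoroutinefunction(coro)
assert inspect.isgeneratorfunction(gen)
assert not inspect.isgeneratorfunction(plain)
assert not inspect.iscoroutinefunction(plain)
```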
def visit_CFuncDefNode(self, node):
@@ -2576,24 +2667,28 @@
def find_entries_used_in_closures(self, node):
from_closure = []
in_closure = []
- for name, entry in node.local_scope.entries.items():
- if entry.from_closure:
- from_closure.append((name, entry))
- elif entry.in_closure:
- in_closure.append((name, entry))
+ for scope in node.local_scope.iter_local_scopes():
+ for name, entry in scope.entries.items():
+ if not name:
+ continue
+ if entry.from_closure:
+ from_closure.append((name, entry))
+ elif entry.in_closure:
+ in_closure.append((name, entry))
return from_closure, in_closure
def create_class_from_scope(self, node, target_module_scope, inner_node=None):
# move local variables into closure
if node.is_generator:
- for entry in node.local_scope.entries.values():
- if not entry.from_closure:
- entry.in_closure = True
+ for scope in node.local_scope.iter_local_scopes():
+ for entry in scope.entries.values():
+ if not (entry.from_closure or entry.is_pyglobal or entry.is_cglobal):
+ entry.in_closure = True
from_closure, in_closure = self.find_entries_used_in_closures(node)
in_closure.sort()
- # Now from the begining
+ # Now from the beginning
node.needs_closure = False
node.needs_outer_scope = False
@@ -2620,9 +2715,12 @@
node.needs_outer_scope = True
return
+ # entry.cname can contain periods (eg. a derived C method of a class).
+ # We want to use the cname as part of a C struct name, so we replace
+ # periods with double underscores.
as_name = '%s_%s' % (
target_module_scope.next_id(Naming.closure_class_prefix),
- node.entry.cname)
+ node.entry.cname.replace('.','__'))
entry = target_module_scope.declare_c_class(
name=as_name, pos=node.pos, defining=True,
@@ -2633,6 +2731,9 @@
class_scope = entry.type.scope
class_scope.is_internal = True
class_scope.is_closure_class_scope = True
+ if node.is_async_def or node.is_generator:
+ # Generators need their closure intact during cleanup as they resume to handle GeneratorExit
+ class_scope.directives['no_gc_clear'] = True
if Options.closure_freelist_size:
class_scope.directives['freelist'] = Options.closure_freelist_size
@@ -2645,11 +2746,12 @@
is_cdef=True)
node.needs_outer_scope = True
for name, entry in in_closure:
- closure_entry = class_scope.declare_var(pos=entry.pos,
- name=entry.name,
- cname=entry.cname,
- type=entry.type,
- is_cdef=True)
+ closure_entry = class_scope.declare_var(
+ pos=entry.pos,
+ name=entry.name if not entry.in_subscope else None,
+ cname=entry.cname,
+ type=entry.type,
+ is_cdef=True)
if entry.is_declared_generic:
closure_entry.is_declared_generic = 1
node.needs_closure = True
@@ -2691,6 +2793,60 @@
return node
+class InjectGilHandling(VisitorTransform, SkipDeclarations):
+ """
+ Allow certain Python operations inside of nogil blocks by implicitly acquiring the GIL.
+
+ Must run before the AnalyseDeclarationsTransform to make sure the GILStatNodes get
+ set up, parallel sections know that the GIL is acquired inside of them, etc.
+ """
+ def __call__(self, root):
+ self.nogil = False
+ return super(InjectGilHandling, self).__call__(root)
+
+ # special node handling
+
+ def visit_RaiseStatNode(self, node):
+ """Allow raising exceptions in nogil sections by wrapping them in a 'with gil' block."""
+ if self.nogil:
+ node = Nodes.GILStatNode(node.pos, state='gil', body=node)
+ return node
+
+ # further candidates:
+ # def visit_AssertStatNode(self, node):
+ # def visit_ReraiseStatNode(self, node):
+
+ # nogil tracking
+
+ def visit_GILStatNode(self, node):
+ was_nogil = self.nogil
+ self.nogil = (node.state == 'nogil')
+ self.visitchildren(node)
+ self.nogil = was_nogil
+ return node
+
+ def visit_CFuncDefNode(self, node):
+ was_nogil = self.nogil
+ if isinstance(node.declarator, Nodes.CFuncDeclaratorNode):
+ self.nogil = node.declarator.nogil and not node.declarator.with_gil
+ self.visitchildren(node)
+ self.nogil = was_nogil
+ return node
+
+ def visit_ParallelRangeNode(self, node):
+ was_nogil = self.nogil
+ self.nogil = node.nogil
+ self.visitchildren(node)
+ self.nogil = was_nogil
+ return node
+
+ def visit_ExprNode(self, node):
+ # No special GIL handling inside of expressions for now.
+ return node
+
+ visit_Node = VisitorTransform.recurse_to_children
+
+
class GilCheck(VisitorTransform):
"""
Call `node.gil_check(env)` on each node to make sure we hold the
@@ -2710,24 +2866,33 @@
self.nogil_declarator_only = False
return super(GilCheck, self).__call__(root)
+ def _visit_scoped_children(self, node, gil_state):
+ was_nogil = self.nogil
+ outer_attrs = node.outer_attrs
+ if outer_attrs and len(self.env_stack) > 1:
+ self.nogil = self.env_stack[-2].nogil
+ self.visitchildren(node, outer_attrs)
+
+ self.nogil = gil_state
+ self.visitchildren(node, exclude=outer_attrs)
+ self.nogil = was_nogil
+
def visit_FuncDefNode(self, node):
self.env_stack.append(node.local_scope)
- was_nogil = self.nogil
- self.nogil = node.local_scope.nogil
+ inner_nogil = node.local_scope.nogil
- if self.nogil:
+ if inner_nogil:
self.nogil_declarator_only = True
- if self.nogil and node.nogil_check:
+ if inner_nogil and node.nogil_check:
node.nogil_check(node.local_scope)
- self.visitchildren(node)
+ self._visit_scoped_children(node, inner_nogil)
# This cannot be nested, so it doesn't need backup/restore
self.nogil_declarator_only = False
self.env_stack.pop()
- self.nogil = was_nogil
return node
def visit_GILStatNode(self, node):
@@ -2735,9 +2900,9 @@
node.nogil_check()
was_nogil = self.nogil
- self.nogil = (node.state == 'nogil')
+ is_nogil = (node.state == 'nogil')
- if was_nogil == self.nogil and not self.nogil_declarator_only:
+ if was_nogil == is_nogil and not self.nogil_declarator_only:
if not was_nogil:
error(node.pos, "Trying to acquire the GIL while it is "
"already held.")
@@ -2750,8 +2915,7 @@
# which is wrapped in a StatListNode. Just unpack that.
node.finally_clause, = node.finally_clause.stats
- self.visitchildren(node)
- self.nogil = was_nogil
+ self._visit_scoped_children(node, is_nogil)
return node
def visit_ParallelRangeNode(self, node):
@@ -2798,8 +2962,12 @@
def visit_Node(self, node):
if self.env_stack and self.nogil and node.nogil_check:
node.nogil_check(self.env_stack[-1])
- self.visitchildren(node)
- node.in_nogil_context = self.nogil
+ if node.outer_attrs:
+ self._visit_scoped_children(node, self.nogil)
+ else:
+ self.visitchildren(node)
+ if self.nogil:
+ node.in_nogil_context = True
return node
@@ -3068,8 +3236,9 @@
return self.transform(node)
def visit_PrimaryCmpNode(self, node):
- type1 = node.operand1.analyse_as_type(self.local_scope)
- type2 = node.operand2.analyse_as_type(self.local_scope)
+ with Errors.local_errors(ignore=True):
+ type1 = node.operand1.analyse_as_type(self.local_scope)
+ type2 = node.operand2.analyse_as_type(self.local_scope)
if type1 and type2:
false_node = ExprNodes.BoolNode(node.pos, value=False)
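The `InjectGilHandling` and `GilCheck` hunks above both rely on the same save/restore pattern for tracking GIL state while walking the tree: remember the outer state, set the state for the subtree, recurse, then restore. A minimal standalone sketch of that pattern (the `Node`/`NogilTracker` names are illustrative, not Cython's actual classes):

```python
class Node:
    """A tiny tree node: 'kind' tags the construct, children are nested nodes."""
    def __init__(self, kind, *children):
        self.kind = kind
        self.children = children


class NogilTracker:
    """Mimics GilCheck's bookkeeping: save the outer nogil state, set the
    new state for the subtree, then restore it on the way back out."""
    def __init__(self):
        self.nogil = False
        self.seen = []  # (kind, nogil_state) pairs, for inspection

    def visit(self, node):
        if node.kind in ('gil', 'nogil'):
            was_nogil = self.nogil               # save
            self.nogil = (node.kind == 'nogil')  # state for the subtree
            for child in node.children:
                self.visit(child)
            self.nogil = was_nogil               # restore
        else:
            self.seen.append((node.kind, self.nogil))
            for child in node.children:
                self.visit(child)


tree = Node('func',
            Node('stmt1'),
            Node('nogil', Node('stmt2'), Node('gil', Node('stmt3'))),
            Node('stmt4'))
t = NogilTracker()
t.visit(tree)
print(t.seen)
# [('func', False), ('stmt1', False), ('stmt2', True), ('stmt3', False), ('stmt4', False)]
```

The `_visit_scoped_children` helper in the patch refines this further by visiting `outer_attrs` under the *enclosing* scope's state before switching to the inner one.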
diff -Nru cython-0.26.1/Cython/Compiler/Parsing.pxd cython-0.29.14/Cython/Compiler/Parsing.pxd
--- cython-0.26.1/Cython/Compiler/Parsing.pxd 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Parsing.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -182,16 +182,17 @@
cdef p_c_func_or_var_declaration(PyrexScanner s, pos, ctx)
cdef p_ctypedef_statement(PyrexScanner s, ctx)
cdef p_decorators(PyrexScanner s)
+cdef _reject_cdef_modifier_in_py(PyrexScanner s, name)
cdef p_def_statement(PyrexScanner s, list decorators=*, bint is_async_def=*)
cdef p_varargslist(PyrexScanner s, terminator=*, bint annotated = *)
cdef p_py_arg_decl(PyrexScanner s, bint annotated = *)
cdef p_class_statement(PyrexScanner s, decorators)
cdef p_c_class_definition(PyrexScanner s, pos, ctx)
-cdef p_c_class_options(PyrexScanner s)
+cdef tuple p_c_class_options(PyrexScanner s)
cdef p_property_decl(PyrexScanner s)
cdef p_doc_string(PyrexScanner s)
cdef p_ignorable_statement(PyrexScanner s)
-cdef p_compiler_directive_comments(PyrexScanner s)
+cdef dict p_compiler_directive_comments(PyrexScanner s)
cdef p_template_definition(PyrexScanner s)
cdef p_cpp_class_definition(PyrexScanner s, pos, ctx)
cdef p_cpp_class_attribute(PyrexScanner s, ctx)
diff -Nru cython-0.26.1/Cython/Compiler/Parsing.py cython-0.29.14/Cython/Compiler/Parsing.py
--- cython-0.26.1/Cython/Compiler/Parsing.py 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Parsing.py 2019-05-27 19:37:21.000000000 +0000
@@ -11,9 +11,10 @@
bytes_literal=object, StringEncoding=object,
FileSourceDescriptor=object, lookup_unicodechar=object, unicode_category=object,
Future=object, Options=object, error=object, warning=object,
- Builtin=object, ModuleNode=object, Utils=object,
- re=object, sys=object, _parse_escape_sequences=object, _unicode=object, _bytes=object,
- partial=object, reduce=object, _IS_PY3=cython.bint, _IS_2BYTE_UNICODE=cython.bint)
+ Builtin=object, ModuleNode=object, Utils=object, _unicode=object, _bytes=object,
+ re=object, sys=object, _parse_escape_sequences=object, _parse_escape_sequences_raw=object,
+ partial=object, reduce=object, _IS_PY3=cython.bint, _IS_2BYTE_UNICODE=cython.bint,
+ _CDEF_MODIFIERS=tuple)
from io import StringIO
import re
@@ -35,6 +36,7 @@
_IS_PY3 = sys.version_info[0] >= 3
_IS_2BYTE_UNICODE = sys.maxunicode == 0xffff
+_CDEF_MODIFIERS = ('inline', 'nogil', 'api')
class Ctx(object):
@@ -501,7 +503,7 @@
break
s.next()
- if s.sy == 'for':
+ if s.sy in ('for', 'async'):
if not keyword_args and not last_was_tuple_unpack:
if len(positional_args) == 1 and len(positional_args[0]) == 1:
positional_args = [[p_genexp(s, positional_args[0][0])]]
@@ -703,17 +705,18 @@
s.error("invalid string kind '%s'" % kind)
elif sy == 'IDENT':
name = s.systring
- s.next()
if name == "None":
- return ExprNodes.NoneNode(pos)
+ result = ExprNodes.NoneNode(pos)
elif name == "True":
- return ExprNodes.BoolNode(pos, value=True)
+ result = ExprNodes.BoolNode(pos, value=True)
elif name == "False":
- return ExprNodes.BoolNode(pos, value=False)
+ result = ExprNodes.BoolNode(pos, value=False)
elif name == "NULL" and not s.in_python_file:
- return ExprNodes.NullNode(pos)
+ result = ExprNodes.NullNode(pos)
else:
- return p_name(s, name)
+ result = p_name(s, name)
+ s.next()
+ return result
else:
s.error("Expected an identifier or literal")
@@ -955,9 +958,10 @@
error(pos, u"invalid character literal: %r" % bytes_value)
else:
bytes_value, unicode_value = chars.getstrings()
- if is_python3_source and has_non_ascii_literal_characters:
+ if (has_non_ascii_literal_characters
+ and is_python3_source and Future.unicode_literals in s.context.future_directives):
# Python 3 forbids literal non-ASCII characters in byte strings
- if kind not in ('u', 'f'):
+ if kind == 'b':
s.error("bytes can only contain ASCII literal characters.", pos=pos)
bytes_value = None
if kind == 'f':
@@ -1012,22 +1016,25 @@
builder.append(escape_sequence)
-_parse_escape_sequences = re.compile(
+_parse_escape_sequences_raw, _parse_escape_sequences = [re.compile((
# escape sequences:
- br'(\\(?:'
- br'[\\abfnrtv"\'{]|'
- br'[0-7]{2,3}|'
- br'N\{[^}]*\}|'
- br'x[0-9a-fA-F]{2}|'
- br'u[0-9a-fA-F]{4}|'
- br'U[0-9a-fA-F]{8}|'
- br'[NuU]|' # detect invalid escape sequences that do not match above
+ br'(\\(?:' +
+ (br'\\?' if is_raw else (
+ br'[\\abfnrtv"\'{]|'
+ br'[0-7]{2,3}|'
+ br'N\{[^}]*\}|'
+ br'x[0-9a-fA-F]{2}|'
+ br'u[0-9a-fA-F]{4}|'
+ br'U[0-9a-fA-F]{8}|'
+ br'[NxuU]|' # detect invalid escape sequences that do not match above
+ )) +
br')?|'
# non-escape sequences:
br'\{\{?|'
br'\}\}?|'
- br'[^\\{}]+)'.decode('us-ascii')
-).match
+ br'[^\\{}]+)'
+ ).decode('us-ascii')).match
+ for is_raw in (True, False)]
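The hunk above builds two compiled regexes, a raw-string variant and a normal variant, from a single pattern template inside one list comprehension keyed on `is_raw`. The same trick in isolation, with a deliberately simplified pattern (not Cython's full escape grammar):

```python
import re

# Build two matchers from one template: the raw variant only recognises a
# doubled backslash, the normal variant recognises real escape sequences.
match_raw, match_normal = [
    re.compile(
        r'(\\(?:' + (r'\\?' if is_raw else r'[\\nt]|x[0-9a-fA-F]{2}') + r')?|[^\\]+)'
    ).match
    for is_raw in (True, False)
]

print(match_normal('\\n').group())  # backslash + 'n' consumed as one escape
print(match_raw('\\n').group())     # only the backslash; raw strings keep it as-is
```

Varying only the alternation keeps the surrounding f-string machinery (`{{`/`}}` handling, plain-text runs) identical between the two modes.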
def p_f_string(s, unicode_value, pos, is_raw):
@@ -1037,13 +1044,15 @@
next_start = 0
size = len(unicode_value)
builder = StringEncoding.UnicodeLiteralBuilder()
+ error_pos = list(pos) # [src, line, column]
+ _parse_seq = _parse_escape_sequences_raw if is_raw else _parse_escape_sequences
while next_start < size:
end = next_start
- match = _parse_escape_sequences(unicode_value, next_start)
+ error_pos[2] = pos[2] + end # FIXME: handle newlines in string
+ match = _parse_seq(unicode_value, next_start)
if match is None:
- error_pos = (pos[0], pos[1] + end, pos[2]) # FIXME: handle newlines in string
- error(error_pos, "Invalid escape sequence")
+ error(tuple(error_pos), "Invalid escape sequence")
next_start = match.end()
part = match.group()
@@ -1067,8 +1076,7 @@
if part == '}}':
builder.append('}')
else:
- error_pos = (pos[0], pos[1] + end, pos[2]) # FIXME: handle newlines in string
- s.error("f-string: single '}' is not allowed", pos=error_pos)
+ s.error("f-string: single '}' is not allowed", pos=tuple(error_pos))
else:
builder.append(part)
@@ -1133,12 +1141,12 @@
expr_pos = (pos[0], pos[1], pos[2] + starting_index + 2) # TODO: find exact code position (concat, multi-line, ...)
if not expr_str.strip():
- error(pos, "empty expression not allowed in f-string")
+ error(expr_pos, "empty expression not allowed in f-string")
if terminal_char == '!':
i += 1
if i + 2 > size:
- error(pos, "invalid conversion char at end of string")
+ error(expr_pos, "invalid conversion char at end of string")
else:
conversion_char = unicode_value[i]
i += 1
@@ -1151,7 +1159,7 @@
start_format_spec = i + 1
while True:
if i >= size:
- s.error("missing '}' in format specifier")
+ s.error("missing '}' in format specifier", pos=expr_pos)
c = unicode_value[i]
if not in_triple_quotes and not in_string:
if c == '{':
@@ -1196,7 +1204,7 @@
# list_display ::= "[" [listmaker] "]"
# listmaker ::= (test|star_expr) ( comp_for | (',' (test|star_expr))* [','] )
# comp_iter ::= comp_for | comp_if
-# comp_for ::= "for" expression_list "in" testlist [comp_iter]
+# comp_for ::= ["async"] "for" expression_list "in" testlist [comp_iter]
# comp_if ::= "if" test [comp_iter]
def p_list_maker(s):
@@ -1208,7 +1216,7 @@
return ExprNodes.ListNode(pos, args=[])
expr = p_test_or_starred_expr(s)
- if s.sy == 'for':
+ if s.sy in ('for', 'async'):
if expr.is_starred:
s.error("iterable unpacking cannot be used in comprehension")
append = ExprNodes.ComprehensionAppendNode(pos, expr=expr)
@@ -1230,7 +1238,7 @@
def p_comp_iter(s, body):
- if s.sy == 'for':
+ if s.sy in ('for', 'async'):
return p_comp_for(s, body)
elif s.sy == 'if':
return p_comp_if(s, body)
@@ -1239,11 +1247,17 @@
return body
def p_comp_for(s, body):
- # s.sy == 'for'
pos = s.position()
- s.next()
- kw = p_for_bounds(s, allow_testlist=False)
- kw.update(else_clause = None, body = p_comp_iter(s, body))
+ # [async] for ...
+ is_async = False
+ if s.sy == 'async':
+ is_async = True
+ s.next()
+
+ # s.sy == 'for'
+ s.expect('for')
+ kw = p_for_bounds(s, allow_testlist=False, is_async=is_async)
+ kw.update(else_clause=None, body=p_comp_iter(s, body), is_async=is_async)
return Nodes.ForStatNode(pos, **kw)
def p_comp_if(s, body):
@@ -1311,7 +1325,7 @@
else:
break
- if s.sy == 'for':
+ if s.sy in ('for', 'async'):
# dict/set comprehension
if len(parts) == 1 and isinstance(parts[0], list) and len(parts[0]) == 1:
item = parts[0][0]
@@ -1441,13 +1455,13 @@
s.next()
exprs = p_test_or_starred_expr_list(s, expr)
return ExprNodes.TupleNode(pos, args = exprs)
- elif s.sy == 'for':
+ elif s.sy in ('for', 'async'):
return p_genexp(s, expr)
else:
return expr
def p_genexp(s, expr):
- # s.sy == 'for'
+ # s.sy == 'async' | 'for'
loop = p_comp_for(s, Nodes.ExprStatNode(
expr.pos, expr = ExprNodes.YieldExprNode(expr.pos, arg=expr)))
return ExprNodes.GeneratorExpressionNode(expr.pos, loop=loop)
@@ -1478,13 +1492,17 @@
def p_expression_or_assignment(s):
- expr_list = [p_testlist_star_expr(s)]
- if s.sy == '=' and expr_list[0].is_starred:
+ expr = p_testlist_star_expr(s)
+ if s.sy == ':' and (expr.is_name or expr.is_subscript or expr.is_attribute):
+ s.next()
+ expr.annotation = p_test(s)
+ if s.sy == '=' and expr.is_starred:
# This is a common enough error to make when learning Cython to let
# it fail as early as possible and give a very clear error message.
s.error("a starred assignment target must be in a list or tuple"
" - maybe you meant to use an index assignment: var[0] = ...",
- pos=expr_list[0].pos)
+ pos=expr.pos)
+ expr_list = [expr]
while s.sy == '=':
s.next()
if s.sy == 'yield':
@@ -2143,7 +2161,14 @@
stat = stats[0]
else:
stat = Nodes.StatListNode(pos, stats = stats)
+
+ if s.sy not in ('NEWLINE', 'EOF'):
+ # provide a better error message for users who accidentally write Cython code in .py files
+ if isinstance(stat, Nodes.ExprStatNode):
+ if stat.expr.is_name and stat.expr.name == 'cdef':
+ s.error("The 'cdef' keyword is only allowed in Cython files (pyx/pxi/pxd)", pos)
s.expect_newline("Syntax error in simple statement list")
+
return stat
def p_compile_time_expr(s):
@@ -2160,9 +2185,10 @@
name = p_ident(s)
s.expect('=')
expr = p_compile_time_expr(s)
- value = expr.compile_time_value(denv)
- #print "p_DEF_statement: %s = %r" % (name, value) ###
- denv.declare(name, value)
+ if s.compile_time_eval:
+ value = expr.compile_time_value(denv)
+ #print "p_DEF_statement: %s = %r" % (name, value) ###
+ denv.declare(name, value)
s.expect_newline("Expected a newline", ignore_semicolon=True)
return Nodes.PassStatNode(pos)
@@ -2458,9 +2484,12 @@
error(pos, "Expected an identifier, found '%s'" % s.sy)
if s.systring == 'const':
s.next()
- base_type = p_c_base_type(s,
- self_flag = self_flag, nonempty = nonempty, templates = templates)
- return Nodes.CConstTypeNode(pos, base_type = base_type)
+ base_type = p_c_base_type(s, self_flag=self_flag, nonempty=nonempty, templates=templates)
+ if isinstance(base_type, Nodes.MemoryViewSliceTypeNode):
+ # reverse order to avoid having to write "(const int)[:]"
+ base_type.base_type_node = Nodes.CConstTypeNode(pos, base_type=base_type.base_type_node)
+ return base_type
+ return Nodes.CConstTypeNode(pos, base_type=base_type)
if looking_at_base_type(s):
#print "p_c_simple_base_type: looking_at_base_type at", s.position()
is_basic = 1
@@ -2687,6 +2716,7 @@
"ssize_t" : (2, 0),
"size_t" : (0, 0),
"ptrdiff_t" : (2, 0),
+ "Py_tss_t" : (1, 0),
})
sign_and_longness_words = cython.declare(
@@ -2908,6 +2938,9 @@
name = s.systring
s.next()
exc_val = p_name(s, name)
+ elif s.sy == '*':
+ exc_val = ExprNodes.CharNode(s.position(), value=u'*')
+ s.next()
else:
if s.sy == '?':
exc_check = 1
@@ -3058,9 +3091,13 @@
ctx.namespace = p_string_literal(s, 'u')[2]
if p_nogil(s):
ctx.nogil = 1
- body = p_suite(s, ctx)
+
+ # Use "docstring" as verbatim string to include
+ verbatim_include, body = p_suite_with_docstring(s, ctx, True)
+
return Nodes.CDefExternNode(pos,
include_file = include_file,
+ verbatim_include = verbatim_include,
body = body,
namespace = ctx.namespace)
@@ -3224,6 +3261,14 @@
is_const_method = 1
else:
is_const_method = 0
+ if s.sy == '->':
+ # Special enough to give a better error message and keep going.
+ s.error(
+ "Return type annotation is not allowed in cdef/cpdef signatures. "
+ "Please define it before the function name, as in C signatures.",
+ fatal=False)
+ s.next()
+ p_test(s) # Keep going, but ignore result.
if s.sy == ':':
if ctx.level not in ('module', 'c_class', 'module_pxd', 'c_class_pxd', 'cpp_class') and not ctx.templates:
s.error("C function definition not allowed here")
@@ -3311,6 +3356,16 @@
return decorators
+def _reject_cdef_modifier_in_py(s, name):
+ """Step over incorrectly placed cdef modifiers (@see _CDEF_MODIFIERS) to provide a good error message for them.
+ """
+ if s.sy == 'IDENT' and name in _CDEF_MODIFIERS:
+ # Special enough to provide a good error message.
+ s.error("Cannot use cdef modifier '%s' in Python function signature. Use a decorator instead." % name, fatal=False)
+ return p_ident(s) # Keep going, in case there are other errors.
+ return name
+
+
def p_def_statement(s, decorators=None, is_async_def=False):
# s.sy == 'def'
pos = s.position()
@@ -3318,16 +3373,20 @@
if is_async_def:
s.enter_async()
s.next()
- name = p_ident(s)
- s.expect('(')
+ name = _reject_cdef_modifier_in_py(s, p_ident(s))
+ s.expect(
+ '(',
+ "Expected '(', found '%s'. Did you use cdef syntax in a Python declaration? "
+ "Use decorators and Python type annotations instead." % (
+ s.systring if s.sy == 'IDENT' else s.sy))
args, star_arg, starstar_arg = p_varargslist(s, terminator=')')
s.expect(')')
- if p_nogil(s):
- error(pos, "Python function cannot be declared nogil")
+ _reject_cdef_modifier_in_py(s, s.systring)
return_type_annotation = None
if s.sy == '->':
s.next()
return_type_annotation = p_test(s)
+ _reject_cdef_modifier_in_py(s, s.systring)
doc, body = p_suite_with_docstring(s, Ctx(level='function'))
if is_async_def:
@@ -3412,23 +3471,20 @@
as_name = class_name
objstruct_name = None
typeobj_name = None
- base_class_module = None
- base_class_name = None
+ bases = None
+ check_size = None
if s.sy == '(':
- s.next()
- base_class_path = [p_ident(s)]
- while s.sy == '.':
- s.next()
- base_class_path.append(p_ident(s))
- if s.sy == ',':
- s.error("C class may only have one base class", fatal=False)
- s.expect(')')
- base_class_module = ".".join(base_class_path[:-1])
- base_class_name = base_class_path[-1]
+ positional_args, keyword_args = p_call_parse_args(s, allow_genexp=False)
+ if keyword_args:
+ s.error("C classes cannot take keyword bases.")
+ bases, _ = p_call_build_packed_args(pos, positional_args, keyword_args)
+ if bases is None:
+ bases = ExprNodes.TupleNode(pos, args=[])
+
if s.sy == '[':
if ctx.visibility not in ('public', 'extern') and not ctx.api:
error(s.position(), "Name options only allowed for 'public', 'api', or 'extern' C class")
- objstruct_name, typeobj_name = p_c_class_options(s)
+ objstruct_name, typeobj_name, check_size = p_c_class_options(s)
if s.sy == ':':
if ctx.level == 'module_pxd':
body_level = 'c_class_pxd'
@@ -3464,17 +3520,19 @@
module_name = ".".join(module_path),
class_name = class_name,
as_name = as_name,
- base_class_module = base_class_module,
- base_class_name = base_class_name,
+ bases = bases,
objstruct_name = objstruct_name,
typeobj_name = typeobj_name,
+ check_size = check_size,
in_pxd = ctx.level == 'module_pxd',
doc = doc,
body = body)
+
def p_c_class_options(s):
objstruct_name = None
typeobj_name = None
+ check_size = None
s.expect('[')
while 1:
if s.sy != 'IDENT':
@@ -3485,11 +3543,16 @@
elif s.systring == 'type':
s.next()
typeobj_name = p_ident(s)
+ elif s.systring == 'check_size':
+ s.next()
+ check_size = p_ident(s)
+ if check_size not in ('ignore', 'warn', 'error'):
+ s.error("Expected one of ignore, warn or error, found %r" % check_size)
if s.sy != ',':
break
s.next()
- s.expect(']', "Expected 'object' or 'type'")
- return objstruct_name, typeobj_name
+ s.expect(']', "Expected 'object', 'type' or 'check_size'")
+ return objstruct_name, typeobj_name, check_size
def p_property_decl(s):
@@ -3568,31 +3631,60 @@
repr(s.sy), repr(s.systring)))
return body
+
_match_compiler_directive_comment = cython.declare(object, re.compile(
r"^#\s*cython\s*:\s*((\w|[.])+\s*=.*)$").match)
+
def p_compiler_directive_comments(s):
result = {}
while s.sy == 'commentline':
+ pos = s.position()
m = _match_compiler_directive_comment(s.systring)
if m:
- directives = m.group(1).strip()
+ directives_string = m.group(1).strip()
try:
- result.update(Options.parse_directive_list(
- directives, ignore_unknown=True))
+ new_directives = Options.parse_directive_list(directives_string, ignore_unknown=True)
except ValueError as e:
s.error(e.args[0], fatal=False)
+ s.next()
+ continue
+
+ for name in new_directives:
+ if name not in result:
+ pass
+ elif new_directives[name] == result[name]:
+ warning(pos, "Duplicate directive found: %s" % (name,))
+ else:
+ s.error("Conflicting settings found for top-level directive %s: %r and %r" % (
+ name, result[name], new_directives[name]), pos=pos)
+
+ if 'language_level' in new_directives:
+ # Make sure we apply the language level already to the first token that follows the comments.
+ s.context.set_language_level(new_directives['language_level'])
+
+ result.update(new_directives)
+
s.next()
return result
+
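The duplicate/conflict handling added to `p_compiler_directive_comments` above distinguishes three cases per directive: new, repeated with the same value (warn), and repeated with a different value (error). A plain-dict sketch of that merge logic, with the parser state replaced by return values (names here are illustrative, not Cython's):

```python
# Merge newly parsed directives into an accumulator, reporting duplicates
# and conflicts the way the patched parser does.
def merge_directives(result, new_directives):
    warnings_, errors = [], []
    for name, value in new_directives.items():
        if name not in result:
            pass  # first occurrence: nothing to report
        elif value == result[name]:
            warnings_.append("Duplicate directive found: %s" % name)
        else:
            errors.append("Conflicting settings found for top-level directive "
                          "%s: %r and %r" % (name, result[name], value))
    result.update(new_directives)
    return warnings_, errors


acc = {}
merge_directives(acc, {'language_level': 3})
w, e = merge_directives(acc, {'language_level': 3, 'boundscheck': False})
print(w, e)  # duplicate warning for language_level, no errors
w, e = merge_directives(acc, {'boundscheck': True})
print(e)     # conflict error for boundscheck
```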
def p_module(s, pxd, full_module_name, ctx=Ctx):
pos = s.position()
directive_comments = p_compiler_directive_comments(s)
s.parse_comments = False
- if 'language_level' in directive_comments:
- s.context.set_language_level(directive_comments['language_level'])
+ if s.context.language_level is None:
+ s.context.set_language_level(2)
+ if pos[0].filename:
+ import warnings
+ warnings.warn(
+ "Cython directive 'language_level' not set, using 2 for now (Py2). "
+ "This will change in a later release! File: %s" % pos[0].filename,
+ FutureWarning,
+ stacklevel=1 if cython.compiled else 2,
+ )
doc = p_doc_string(s)
if pxd:
diff -Nru cython-0.26.1/Cython/Compiler/Pipeline.py cython-0.29.14/Cython/Compiler/Pipeline.py
--- cython-0.26.1/Cython/Compiler/Pipeline.py 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Pipeline.py 2018-11-24 09:20:06.000000000 +0000
@@ -6,7 +6,6 @@
from . import Errors
from . import DebugFlags
from . import Options
-from .Visitor import CythonTransform
from .Errors import CompileError, InternalError, AbortError
from . import Naming
@@ -142,7 +141,7 @@
assert mode in ('pyx', 'py', 'pxd')
from .Visitor import PrintTree
from .ParseTreeTransforms import WithTransform, NormalizeTree, PostParse, PxdPostParse
- from .ParseTreeTransforms import ForwardDeclareTypes, AnalyseDeclarationsTransform
+ from .ParseTreeTransforms import ForwardDeclareTypes, InjectGilHandling, AnalyseDeclarationsTransform
from .ParseTreeTransforms import AnalyseExpressionsTransform, FindInvalidUseOfFusedTypes
from .ParseTreeTransforms import CreateClosureClasses, MarkClosureVisitor, DecoratorTransform
from .ParseTreeTransforms import TrackNumpyAttributes, InterpretCompilerDirectives, TransformBuiltinMethods
@@ -183,7 +182,7 @@
NormalizeTree(context),
PostParse(context),
_specific_post_parse,
- TrackNumpyAttributes(context),
+ TrackNumpyAttributes(),
InterpretCompilerDirectives(context, context.compiler_directives),
ParallelRangeTransform(context),
AdjustDefByDirectives(context),
@@ -195,6 +194,7 @@
FlattenInListTransform(),
DecoratorTransform(context),
ForwardDeclareTypes(context),
+ InjectGilHandling(),
AnalyseDeclarationsTransform(context),
AutoTestDictTransform(context),
EmbedSignature(context),
@@ -324,8 +324,15 @@
# Running a pipeline
#
+_pipeline_entry_points = {}
+
+
def run_pipeline(pipeline, source, printtree=True):
from .Visitor import PrintTree
+ exec_ns = globals().copy() if DebugFlags.debug_verbose_pipeline else None
+
+ def run(phase, data):
+ return phase(data)
error = None
data = source
@@ -333,12 +340,19 @@
try:
for phase in pipeline:
if phase is not None:
+ if not printtree and isinstance(phase, PrintTree):
+ continue
if DebugFlags.debug_verbose_pipeline:
t = time()
print("Entering pipeline phase %r" % phase)
- if not printtree and isinstance(phase, PrintTree):
- continue
- data = phase(data)
+ # create a new wrapper for each step to show the name in profiles
+ phase_name = getattr(phase, '__name__', type(phase).__name__)
+ try:
+ run = _pipeline_entry_points[phase_name]
+ except KeyError:
+ exec("def %s(phase, data): return phase(data)" % phase_name, exec_ns)
+ run = _pipeline_entry_points[phase_name] = exec_ns[phase_name]
+ data = run(phase, data)
if DebugFlags.debug_verbose_pipeline:
print(" %.3f seconds" % (time() - t))
except CompileError as err:
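The `run_pipeline` change above generates one wrapper function per pipeline phase via `exec()`, purely so that profilers show the phase name instead of a single generic frame. The trick in isolation (a simplified sketch; the real code only enables this under `debug_verbose_pipeline`):

```python
# Cache of generated per-phase entry points, keyed by phase name.
_entry_points = {}

def get_runner(phase_name):
    """Return a function literally named after the phase, so that the
    phase shows up under its own name in profiler output."""
    try:
        return _entry_points[phase_name]
    except KeyError:
        ns = {}
        exec("def %s(phase, data): return phase(data)" % phase_name, ns)
        runner = _entry_points[phase_name] = ns[phase_name]
        return runner


run = get_runner("NormalizeTree")
print(run.__name__)                    # 'NormalizeTree', visible in profiles
print(run(lambda data: data + 1, 41))  # 42
```

Caching the generated functions matters: `exec` is slow, but each phase name is compiled only once per process.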
diff -Nru cython-0.26.1/Cython/Compiler/PyrexTypes.py cython-0.29.14/Cython/Compiler/PyrexTypes.py
--- cython-0.26.1/Cython/Compiler/PyrexTypes.py 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/PyrexTypes.py 2018-11-24 09:20:06.000000000 +0000
@@ -192,7 +192,8 @@
# is_pythran_expr boolean Is Pythran expr
# is_numpy_buffer boolean Is Numpy array buffer
# has_attributes boolean Has C dot-selectable attributes
- # default_value string Initial value
+ # default_value string Initial value that can be assigned before first user assignment.
+ # declaration_value string The value statically assigned on declaration (if any).
# entry Entry The Entry for this type
#
# declaration_code(entity_code,
@@ -254,6 +255,7 @@
is_numpy_buffer = 0
has_attributes = 0
default_value = ""
+ declaration_value = ""
def resolve(self):
# If a typedef, returns the base type.
@@ -314,6 +316,21 @@
def needs_nonecheck(self):
return 0
+ def _assign_from_py_code(self, source_code, result_code, error_pos, code,
+ from_py_function=None, error_condition=None, extra_args=None):
+ args = ', ' + ', '.join('%s' % arg for arg in extra_args) if extra_args else ''
+ convert_call = "%s(%s%s)" % (
+ from_py_function or self.from_py_function,
+ source_code,
+ args,
+ )
+ if self.is_enum:
+ convert_call = typecast(self, c_long_type, convert_call)
+ return '%s = %s; %s' % (
+ result_code,
+ convert_call,
+ code.error_goto_if(error_condition or self.error_condition(result_code), error_pos))
+
def public_decl(base_code, dll_linkage):
if dll_linkage:
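The new `_assign_from_py_code` helper above centralises what several `from_py_call_code` implementations previously duplicated: emit a C assignment from a conversion call, optionally with extra arguments, followed by an error check. A standalone sketch of the string it builds (the `goto bad` error check stands in for `code.error_goto_if`, and all names here are illustrative):

```python
# Build a C assignment statement from a Python->C conversion call,
# with an error-goto appended, roughly as _assign_from_py_code does.
def assign_from_py_code(from_py_function, source_code, result_code,
                        error_condition, extra_args=None):
    args = ', ' + ', '.join('%s' % a for a in extra_args) if extra_args else ''
    convert_call = "%s(%s%s)" % (from_py_function, source_code, args)
    return '%s = %s; if (%s) goto bad;' % (result_code, convert_call, error_condition)


print(assign_from_py_code('__Pyx_FromPy', 'obj', 'val', 'val == -1',
                          extra_args=['PyBUF_WRITABLE']))
# val = __Pyx_FromPy(obj, PyBUF_WRITABLE); if (val == -1) goto bad;
```

The `extra_args` hook is what lets the memoryview code later pass `PyBUF_WRITABLE` or `0` depending on the constness of the dtype.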
@@ -491,12 +508,11 @@
def from_py_call_code(self, source_code, result_code, error_pos, code,
from_py_function=None, error_condition=None):
- if from_py_function is None:
- from_py_function = self.from_py_function
- if error_condition is None:
- error_condition = self.error_condition(result_code)
return self.typedef_base_type.from_py_call_code(
- source_code, result_code, error_pos, code, from_py_function, error_condition)
+ source_code, result_code, error_pos, code,
+ from_py_function or self.from_py_function,
+ error_condition or self.error_condition(result_code)
+ )
def overflow_check_binop(self, binop, env, const_rhs=False):
env.use_utility_code(UtilityCode.load("Common", "Overflow.c"))
@@ -578,9 +594,9 @@
the packing specifiers specify how the array elements are layed-out
in memory.
- 'contig' -- The data are contiguous in memory along this dimension.
+ 'contig' -- The data is contiguous in memory along this dimension.
At most one dimension may be specified as 'contig'.
- 'strided' -- The data aren't contiguous along this dimenison.
+ 'strided' -- The data isn't contiguous along this dimension.
'follow' -- Used for C/Fortran contiguous arrays, a 'follow' dimension
has its stride automatically computed from extents of the other
dimensions to ensure C or Fortran memory layout.
@@ -619,6 +635,7 @@
def same_as_resolved_type(self, other_type):
return ((other_type.is_memoryviewslice and
+ #self.writable_needed == other_type.writable_needed and # FIXME: should be only uni-directional
self.dtype.same_as(other_type.dtype) and
self.axes == other_type.axes) or
other_type is error_type)
@@ -636,8 +653,9 @@
assert not pyrex
assert not dll_linkage
from . import MemoryView
+ base_code = str(self) if for_display else MemoryView.memviewslice_cname
return self.base_declaration_code(
- MemoryView.memviewslice_cname,
+ base_code,
entity_code)
def attributes_known(self):
@@ -694,8 +712,12 @@
to_axes_c = follow_dim * (ndim - 1) + contig_dim
to_axes_f = contig_dim + follow_dim * (ndim -1)
- to_memview_c = MemoryViewSliceType(self.dtype, to_axes_c)
- to_memview_f = MemoryViewSliceType(self.dtype, to_axes_f)
+ dtype = self.dtype
+ if dtype.is_const:
+ dtype = dtype.const_base_type
+
+ to_memview_c = MemoryViewSliceType(dtype, to_axes_c)
+ to_memview_f = MemoryViewSliceType(dtype, to_axes_f)
for to_memview, cython_name in [(to_memview_c, "copy"),
(to_memview_f, "copy_fortran")]:
@@ -716,10 +738,9 @@
elif attribute in ("is_c_contig", "is_f_contig"):
# is_c_contig and is_f_contig functions
- for (c_or_f, cython_name) in (('c', 'is_c_contig'), ('f', 'is_f_contig')):
+ for (c_or_f, cython_name) in (('C', 'is_c_contig'), ('F', 'is_f_contig')):
- is_contig_name = \
- MemoryView.get_is_contig_func_name(c_or_f, self.ndim)
+ is_contig_name = MemoryView.get_is_contig_func_name(c_or_f, self.ndim)
cfunctype = CFuncType(
return_type=c_bint_type,
@@ -733,8 +754,7 @@
defining=1,
cname=is_contig_name)
- entry.utility_code_definition = MemoryView.get_is_contig_utility(
- attribute == 'is_c_contig', self.ndim)
+ entry.utility_code_definition = MemoryView.get_is_contig_utility(c_or_f, self.ndim)
return True
@@ -767,7 +787,21 @@
src = self
- if src.dtype != dst.dtype:
+ #if not copying and self.writable_needed and not dst.writable_needed:
+ # return False
+
+ src_dtype, dst_dtype = src.dtype, dst.dtype
+ if dst_dtype.is_const:
+ # Requesting read-only views is always ok => consider only the non-const base type.
+ dst_dtype = dst_dtype.const_base_type
+ if src_dtype.is_const:
+ # When assigning between read-only views, compare only the non-const base types.
+ src_dtype = src_dtype.const_base_type
+ elif copying and src_dtype.is_const:
+ # Copying by value => ignore const on source.
+ src_dtype = src_dtype.const_base_type
+
+ if src_dtype != dst_dtype:
return False
if src.ndim != dst.ndim:
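The const-stripping rules in the hunk above can be restated in plain Python: requesting a read-only view is always acceptable, copying by value ignores constness on the source, but a writable view over read-only data is rejected. A simplified sketch, modelling a dtype as a `(base, is_const)` pair (this is an interpretation of the patch, not Cython's actual type objects):

```python
# Decide whether a source memoryview dtype may be assigned to a
# destination dtype, following the patched const-handling rules.
def dtypes_compatible(src_dtype, dst_dtype, copying=False):
    src_base, src_const = src_dtype
    dst_base, dst_const = dst_dtype
    if dst_const:
        # Requesting a read-only view is always ok: compare base types only.
        src_const = False
    elif copying:
        # Copying by value: const on the source is irrelevant.
        src_const = False
    if src_const:
        # Writable view over read-only data is not allowed.
        return False
    return src_base == dst_base


print(dtypes_compatible(('int', False), ('int', True)))                # True
print(dtypes_compatible(('int', True), ('int', False)))                # False
print(dtypes_compatible(('int', True), ('int', False), copying=True))  # True
```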
@@ -856,7 +890,6 @@
return TempitaUtilityCode.load(
"ObjectToMemviewSlice", "MemoryView_C.c", context=context)
- env.use_utility_code(Buffer.acquire_utility_code)
env.use_utility_code(MemoryView.memviewslice_init_code)
env.use_utility_code(LazyUtilityCode(lazy_utility_callback))
@@ -886,11 +919,12 @@
def from_py_call_code(self, source_code, result_code, error_pos, code,
from_py_function=None, error_condition=None):
- return '%s = %s(%s); %s' % (
- result_code,
- from_py_function or self.from_py_function,
- source_code,
- code.error_goto_if(error_condition or self.error_condition(result_code), error_pos))
+ # NOTE: auto-detection of readonly buffers is disabled:
+ # writable = self.writable_needed or not self.dtype.is_const
+ writable = not self.dtype.is_const
+ return self._assign_from_py_code(
+ source_code, result_code, error_pos, code, from_py_function, error_condition,
+ extra_args=['PyBUF_WRITABLE' if writable else '0'])
def create_to_py_utility_code(self, env):
self._dtype_to_py_func, self._dtype_from_py_func = self.dtype_object_conversion_funcs(env)
@@ -918,25 +952,29 @@
if self.dtype.is_pyobject:
utility_name = "MemviewObjectToObject"
else:
- to_py = self.dtype.create_to_py_utility_code(env)
- from_py = self.dtype.create_from_py_utility_code(env)
- if not (to_py or from_py):
- return "NULL", "NULL"
+ self.dtype.create_to_py_utility_code(env)
+ to_py_function = self.dtype.to_py_function
- if not self.dtype.to_py_function:
- get_function = "NULL"
+ from_py_function = None
+ if not self.dtype.is_const:
+ self.dtype.create_from_py_utility_code(env)
+ from_py_function = self.dtype.from_py_function
- if not self.dtype.from_py_function:
+ if not (to_py_function or from_py_function):
+ return "NULL", "NULL"
+ if not to_py_function:
+ get_function = "NULL"
+ if not from_py_function:
set_function = "NULL"
utility_name = "MemviewDtypeToObject"
error_condition = (self.dtype.error_condition('value') or
'PyErr_Occurred()')
context.update(
- to_py_function = self.dtype.to_py_function,
- from_py_function = self.dtype.from_py_function,
- dtype = self.dtype.empty_declaration_code(),
- error_condition = error_condition,
+ to_py_function=to_py_function,
+ from_py_function=from_py_function,
+ dtype=self.dtype.empty_declaration_code(),
+ error_condition=error_condition,
)
utility = TempitaUtilityCode.load_cached(
@@ -1086,6 +1124,7 @@
name = "object"
is_pyobject = 1
default_value = "0"
+ declaration_value = "0"
buffer_defaults = None
is_extern = False
is_subclassed = False
@@ -1305,14 +1344,17 @@
# vtabstruct_cname string Name of C method table struct
# vtabptr_cname string Name of pointer to C method table
# vtable_cname string Name of C method table definition
+ # early_init boolean Whether to initialize early (as opposed to during module execution).
# defered_declarations [thunk] Used to declare class hierarchies in order
+ # check_size 'warn', 'error', 'ignore' What to do if tp_basicsize does not match
is_extension_type = 1
has_attributes = 1
+ early_init = 1
objtypedef_cname = None
- def __init__(self, name, typedef_flag, base_type, is_external=0):
+ def __init__(self, name, typedef_flag, base_type, is_external=0, check_size=None):
self.name = name
self.scope = None
self.typedef_flag = typedef_flag
@@ -1328,6 +1370,7 @@
self.vtabptr_cname = None
self.vtable_cname = None
self.is_external = is_external
+ self.check_size = check_size or 'warn'
self.defered_declarations = []
def set_scope(self, scope):
@@ -1468,16 +1511,15 @@
def from_py_call_code(self, source_code, result_code, error_pos, code,
from_py_function=None, error_condition=None):
- return '%s = %s(%s); %s' % (
- result_code,
- from_py_function or self.from_py_function,
- source_code,
- code.error_goto_if(error_condition or self.error_condition(result_code), error_pos))
+ return self._assign_from_py_code(
+ source_code, result_code, error_pos, code, from_py_function, error_condition)
+
+
class PythranExpr(CType):
# Pythran object of a given type
- to_py_function = "to_python_from_expr"
+ to_py_function = "__Pyx_pythran_to_python"
is_pythran_expr = True
writable = True
has_attributes = 1
@@ -1490,25 +1532,32 @@
self.from_py_function = "from_python<%s>" % (self.pythran_type)
self.scope = None
- def declaration_code(self, entity_code, for_display = 0, dll_linkage = None, pyrex = 0):
- assert pyrex == 0
- return "%s %s" % (self.name, entity_code)
+ def declaration_code(self, entity_code, for_display=0, dll_linkage=None, pyrex=0):
+ assert not pyrex
+ return "%s %s" % (self.cname, entity_code)
def attributes_known(self):
if self.scope is None:
from . import Symtab
- self.scope = scope = Symtab.CClassScope(
- '',
- None,
- visibility="extern")
+ # FIXME: fake C scope, might be better represented by a struct or C++ class scope
+ self.scope = scope = Symtab.CClassScope('', None, visibility="extern")
scope.parent_type = self
scope.directives = {}
- # rank 3 == long
- scope.declare_var("shape", CPtrType(CIntType(3)), None, cname="_shape", is_cdef=True)
- scope.declare_var("ndim", CIntType(3), None, cname="value", is_cdef=True)
+ scope.declare_var("shape", CPtrType(c_long_type), None, cname="_shape", is_cdef=True)
+ scope.declare_var("ndim", c_long_type, None, cname="value", is_cdef=True)
return True
+ def __eq__(self, other):
+ return isinstance(other, PythranExpr) and self.pythran_type == other.pythran_type
+
+ def __ne__(self, other):
+ return not (isinstance(other, PythranExpr) and self.pythran_type == other.pythran_type)
+
+ def __hash__(self):
+ return hash(self.pythran_type)
+
+
class CConstType(BaseType):
is_const = 1
@@ -1553,6 +1602,12 @@
self.to_py_function = self.const_base_type.to_py_function
return True
+ def same_as_resolved_type(self, other_type):
+ if other_type.is_const:
+ return self.const_base_type.same_as_resolved_type(other_type.const_base_type)
+ # Accept const LHS <- non-const RHS.
+ return self.const_base_type.same_as_resolved_type(other_type)
+
def __getattr__(self, name):
return getattr(self.const_base_type, name)
@@ -1723,15 +1778,13 @@
ForbidUse = ForbidUseClass()
-class CIntType(CNumericType):
-
- is_int = 1
- typedef_flag = 0
+class CIntLike(object):
+ """Mixin for shared behaviour of C integers and enums.
+ """
to_py_function = None
from_py_function = None
to_pyunicode_utility = None
default_format_spec = 'd'
- exception_value = -1
def can_coerce_to_pyobject(self, env):
return True
@@ -1739,6 +1792,24 @@
def can_coerce_from_pyobject(self, env):
return True
+ def create_to_py_utility_code(self, env):
+ if type(self).to_py_function is None:
+ self.to_py_function = "__Pyx_PyInt_From_" + self.specialization_name()
+ env.use_utility_code(TempitaUtilityCode.load_cached(
+ "CIntToPy", "TypeConversion.c",
+ context={"TYPE": self.empty_declaration_code(),
+ "TO_PY_FUNCTION": self.to_py_function}))
+ return True
+
+ def create_from_py_utility_code(self, env):
+ if type(self).from_py_function is None:
+ self.from_py_function = "__Pyx_PyInt_As_" + self.specialization_name()
+ env.use_utility_code(TempitaUtilityCode.load_cached(
+ "CIntFromPy", "TypeConversion.c",
+ context={"TYPE": self.empty_declaration_code(),
+ "FROM_PY_FUNCTION": self.from_py_function}))
+ return True
+
@staticmethod
def _parse_format(format_spec):
padding = ' '
@@ -1781,23 +1852,12 @@
format_type, width, padding_char = self._parse_format(format_spec)
return "%s(%s, %d, '%s', '%s')" % (utility_code_name, cvalue, width, padding_char, format_type)
- def create_to_py_utility_code(self, env):
- if type(self).to_py_function is None:
- self.to_py_function = "__Pyx_PyInt_From_" + self.specialization_name()
- env.use_utility_code(TempitaUtilityCode.load_cached(
- "CIntToPy", "TypeConversion.c",
- context={"TYPE": self.empty_declaration_code(),
- "TO_PY_FUNCTION": self.to_py_function}))
- return True
- def create_from_py_utility_code(self, env):
- if type(self).from_py_function is None:
- self.from_py_function = "__Pyx_PyInt_As_" + self.specialization_name()
- env.use_utility_code(TempitaUtilityCode.load_cached(
- "CIntFromPy", "TypeConversion.c",
- context={"TYPE": self.empty_declaration_code(),
- "FROM_PY_FUNCTION": self.from_py_function}))
- return True
+class CIntType(CIntLike, CNumericType):
+
+ is_int = 1
+ typedef_flag = 0
+ exception_value = -1
def get_to_py_type_conversion(self):
if self.rank < list(rank_to_type_name).index('int'):
@@ -2214,6 +2274,25 @@
}
+class CPyTSSTType(CType):
+ #
+ # PEP-539 "Py_tss_t" type
+ #
+
+ declaration_value = "Py_tss_NEEDS_INIT"
+
+ def __repr__(self):
+ return "<Py_tss_t>"
+
+ def declaration_code(self, entity_code,
+ for_display=0, dll_linkage=None, pyrex=0):
+ if pyrex or for_display:
+ base_code = "Py_tss_t"
+ else:
+ base_code = public_decl("Py_tss_t", dll_linkage)
+ return self.base_declaration_code(base_code, entity_code)
+
+
class CPointerBaseType(CType):
# common base type for pointer/array types
#
@@ -2404,6 +2483,7 @@
def from_py_call_code(self, source_code, result_code, error_pos, code,
from_py_function=None, error_condition=None):
+ assert not error_condition, '%s: %s' % (error_pos, error_condition)
call_code = "%s(%s, %s, %s)" % (
from_py_function or self.from_py_function,
source_code, result_code, self.size)
@@ -2630,7 +2710,11 @@
return self.same_c_signature_as_resolved_type(
other_type.resolve(), as_cmethod)
- def same_c_signature_as_resolved_type(self, other_type, as_cmethod = 0, as_pxd_definition = 0):
+ def same_c_signature_as_resolved_type(self, other_type, as_cmethod=False, as_pxd_definition=False,
+ exact_semantics=True):
+ # If 'exact_semantics' is false, allow any equivalent C signatures
+ # if the Cython semantics are compatible, i.e. the same or wider for 'other_type'.
+
#print "CFuncType.same_c_signature_as_resolved_type:", \
# self, other_type, "as_cmethod =", as_cmethod ###
if other_type is error_type:
@@ -2661,9 +2745,12 @@
return 0
if not self.same_calling_convention_as(other_type):
return 0
- if self.exception_check != other_type.exception_check:
- return 0
- if not self._same_exception_value(other_type.exception_value):
+ if exact_semantics:
+ if self.exception_check != other_type.exception_check:
+ return 0
+ if not self._same_exception_value(other_type.exception_value):
+ return 0
+ elif not self._is_exception_compatible_with(other_type):
return 0
return 1
@@ -2715,14 +2802,25 @@
return 0
if self.nogil != other_type.nogil:
return 0
- if not self.exception_check and other_type.exception_check:
- # a redundant exception check doesn't make functions incompatible, but a missing one does
- return 0
- if not self._same_exception_value(other_type.exception_value):
+ if not self._is_exception_compatible_with(other_type):
return 0
self.original_sig = other_type.original_sig or other_type
return 1
+ def _is_exception_compatible_with(self, other_type):
+ # narrower exception checks are ok, but prevent mismatches
+ if self.exception_check == '+' and other_type.exception_check != '+':
+ # must catch C++ exceptions if we raise them
+ return 0
+ if not other_type.exception_check or other_type.exception_value is not None:
+ # if other does not *always* check exceptions, self must comply
+ if not self._same_exception_value(other_type.exception_value):
+ return 0
+ if self.exception_check and self.exception_check != other_type.exception_check:
+ # a redundant exception check doesn't make functions incompatible, but a missing one does
+ return 0
+ return 1
+
def narrower_c_signature_than(self, other_type, as_cmethod = 0):
return self.narrower_c_signature_than_resolved_type(other_type.resolve(), as_cmethod)
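The `_is_exception_compatible_with` rule added above can be paraphrased as a standalone predicate. This is a simplified sketch, with plain values standing in for the type attributes: `check` is the declared exception check (`'+'` for C++ exception propagation) and `value` the declared exception value.

```python
def exception_compatible(self_check, self_value, other_check, other_value):
    # Narrower exception checks are ok, but mismatched semantics are not.
    if self_check == '+' and other_check != '+':
        # must catch C++ exceptions if we raise them
        return False
    if not other_check or other_value is not None:
        # if other does not *always* check exceptions, self must comply
        if self_value != other_value:
            return False
        if self_check and self_check != other_check:
            # a redundant check is fine, a missing one is not
            return False
    return True

# A function declared 'except -1' matches an identical declaration,
# but a C++-raising one cannot be assigned where no '+' check exists.
assert exception_compatible(True, -1, True, -1)
assert not exception_compatible('+', None, True, -1)
```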
@@ -2767,13 +2865,18 @@
sc2 = other.calling_convention == '__stdcall'
return sc1 == sc2
- def same_as_resolved_type(self, other_type, as_cmethod = 0):
- return self.same_c_signature_as_resolved_type(other_type, as_cmethod) \
+ def same_as_resolved_type(self, other_type, as_cmethod=False):
+ return self.same_c_signature_as_resolved_type(other_type, as_cmethod=as_cmethod) \
and self.nogil == other_type.nogil
- def pointer_assignable_from_resolved_type(self, other_type):
- return self.same_c_signature_as_resolved_type(other_type) \
- and not (self.nogil and not other_type.nogil)
+ def pointer_assignable_from_resolved_type(self, rhs_type):
+ # Accept compatible exception/nogil declarations for the RHS.
+ if rhs_type is error_type:
+ return 1
+ if not rhs_type.is_cfunction:
+ return 0
+ return rhs_type.same_c_signature_as_resolved_type(self, exact_semantics=False) \
+ and not (self.nogil and not rhs_type.nogil)
def declaration_code(self, entity_code,
for_display = 0, dll_linkage = None, pyrex = 0,
@@ -2885,12 +2988,10 @@
elif self.cached_specialized_types is not None:
return self.cached_specialized_types
- cfunc_entries = self.entry.scope.cfunc_entries
- cfunc_entries.remove(self.entry)
-
result = []
permutations = self.get_all_specialized_permutations()
+ new_cfunc_entries = []
for cname, fused_to_specific in permutations:
new_func_type = self.entry.type.specialize(fused_to_specific)
@@ -2905,7 +3006,15 @@
new_func_type.entry = new_entry
result.append(new_func_type)
- cfunc_entries.append(new_entry)
+ new_cfunc_entries.append(new_entry)
+
+ cfunc_entries = self.entry.scope.cfunc_entries
+ try:
+ cindex = cfunc_entries.index(self.entry)
+ except ValueError:
+ cfunc_entries.extend(new_cfunc_entries)
+ else:
+ cfunc_entries[cindex:cindex+1] = new_cfunc_entries
self.cached_specialized_types = result
@@ -3119,15 +3228,18 @@
or_none = False
accept_none = True
accept_builtin_subtypes = False
+ annotation = None
subtypes = ['type']
- def __init__(self, name, type, pos, cname=None):
+ def __init__(self, name, type, pos, cname=None, annotation=None):
self.name = name
if cname is not None:
self.cname = cname
else:
self.cname = Naming.var_prefix + name
+ if annotation is not None:
+ self.annotation = annotation
self.type = type
self.pos = pos
self.needs_type_test = False # TODO: should these defaults be set in analyse_types()?
@@ -3141,6 +3253,7 @@
def specialize(self, values):
return CFuncTypeArg(self.name, self.type.specialize(values), self.pos, self.cname)
+
class ToPyStructUtilityCode(object):
requires = None
@@ -3168,7 +3281,8 @@
code.putln("%s {" % self.header)
code.putln("PyObject* res;")
code.putln("PyObject* member;")
- code.putln("res = PyDict_New(); if (unlikely(!res)) return NULL;")
+ code.putln("res = __Pyx_PyDict_NewPresized(%d); if (unlikely(!res)) return NULL;" %
+ len(self.type.scope.var_entries))
for member in self.type.scope.var_entries:
nameconst_cname = code.get_py_string_const(member.name, identifier=True)
code.putln("%s; if (unlikely(!member)) goto bad;" % (
@@ -3425,13 +3539,17 @@
return ''
def can_coerce_from_pyobject(self, env):
- if self.cname in builtin_cpp_conversions or self.cname in cpp_string_conversions:
+ if self.cname in builtin_cpp_conversions:
+ template_count = builtin_cpp_conversions[self.cname]
for ix, T in enumerate(self.templates or []):
- if ix >= builtin_cpp_conversions[self.cname]:
+ if ix >= template_count:
break
if T.is_pyobject or not T.can_coerce_from_pyobject(env):
return False
return True
+ elif self.cname in cpp_string_conversions:
+ return True
+ return False
def create_from_py_utility_code(self, env):
if self.from_py_function is not None:
@@ -3666,13 +3784,16 @@
if other_type.is_cpp_class:
if self == other_type:
return 1
+ # This messy logic is needed due to GH Issue #1852.
elif (self.cname == other_type.cname and
- self.template_type and other_type.template_type):
+ (self.template_type and other_type.template_type
+ or self.templates
+ or other_type.templates)):
if self.templates == other_type.templates:
return 1
for t1, t2 in zip(self.templates, other_type.templates):
if is_optional_template_param(t1) and is_optional_template_param(t2):
- break
+ break
if not t1.same_as_resolved_type(t2):
return 0
return 1
@@ -3684,6 +3805,8 @@
return True
elif other_type.is_cpp_class:
return other_type.is_subclass(self)
+ elif other_type.is_string and self.cname in cpp_string_conversions:
+ return True
def attributes_known(self):
return self.scope is not None
@@ -3701,6 +3824,23 @@
func_type = func_type.base_type
return func_type.return_type
+ def get_constructor(self, pos):
+ constructor = self.scope.lookup('')
+ if constructor is not None:
+ return constructor
+
+ # Otherwise: automatically declare no-args default constructor.
+ # Make it "nogil" if the base classes allow it.
+ nogil = True
+ for base in self.base_classes:
+ base_constructor = base.scope.lookup('<init>')
+ if base_constructor and not base_constructor.type.nogil:
+ nogil = False
+ break
+
+ func_type = CFuncType(self, [], exception_check='+', nogil=nogil)
+ return self.scope.declare_cfunction(u'<init>', func_type, pos)
+
def check_nullary_constructor(self, pos, msg="stack allocated"):
constructor = self.scope.lookup(u'<init>')
if constructor is not None and best_match([], constructor.all_alternatives()) is None:
@@ -3754,7 +3894,7 @@
return isinstance(type, TemplatePlaceholderType) and type.optional
-class CEnumType(CType):
+class CEnumType(CIntLike, CType):
# name string
# cname string or None
# typedef_flag boolean
@@ -3802,38 +3942,6 @@
self.name, self.cname, self.typedef_flag, namespace)
return self
- def can_coerce_to_pyobject(self, env):
- return True
-
- def can_coerce_from_pyobject(self, env):
- return True
-
- def create_to_py_utility_code(self, env):
- self.to_py_function = "__Pyx_PyInt_From_" + self.specialization_name()
- env.use_utility_code(TempitaUtilityCode.load_cached(
- "CIntToPy", "TypeConversion.c",
- context={"TYPE": self.empty_declaration_code(),
- "TO_PY_FUNCTION": self.to_py_function}))
- return True
-
- def create_from_py_utility_code(self, env):
- self.from_py_function = "__Pyx_PyInt_As_" + self.specialization_name()
- env.use_utility_code(TempitaUtilityCode.load_cached(
- "CIntFromPy", "TypeConversion.c",
- context={"TYPE": self.empty_declaration_code(),
- "FROM_PY_FUNCTION": self.from_py_function}))
- return True
-
- def from_py_call_code(self, source_code, result_code, error_pos, code,
- from_py_function=None, error_condition=None):
- rhs = "%s(%s)" % (
- from_py_function or self.from_py_function,
- source_code)
- return '%s = %s;%s' % (
- result_code,
- typecast(self, c_long_type, rhs),
- ' %s' % code.error_goto_if(error_condition or self.error_condition(result_code), error_pos))
-
def create_type_wrapper(self, env):
from .UtilityCode import CythonUtilityCode
env.use_utility_code(CythonUtilityCode.load(
@@ -4055,6 +4163,9 @@
c_threadstate_type = CStructOrUnionType("PyThreadState", "struct", None, 1, "PyThreadState")
c_threadstate_ptr_type = CPtrType(c_threadstate_type)
+# PEP-539 "Py_tss_t" type
+c_pytss_t_type = CPyTSSTType()
+
# the Py_buffer type is defined in Builtin.py
c_py_buffer_type = CStructOrUnionType("Py_buffer", "struct", None, 1, "Py_buffer")
c_py_buffer_ptr_type = CPtrType(c_py_buffer_type)
@@ -4120,6 +4231,7 @@
#
(1, 0, "void"): c_void_type,
+ (1, 0, "Py_tss_t"): c_pytss_t_type,
(1, 0, "bint"): c_bint_type,
(0, 0, "Py_UNICODE"): c_py_unicode_type,
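The fused-function change above (dropping `cfunc_entries.remove(...)`/`append(...)`) swaps the original entry for its specializations in place via slice assignment, preserving the order of the entry list. A minimal sketch of the idiom, with hypothetical names:

```python
def replace_entry(entries, original, specializations):
    # Replace 'original' in place with its specializations, keeping order;
    # if it was already removed, just append them at the end.
    try:
        index = entries.index(original)
    except ValueError:
        entries.extend(specializations)
    else:
        entries[index:index + 1] = specializations
    return entries

assert replace_entry(['a', 'f', 'z'], 'f', ['f_int', 'f_double']) == \
    ['a', 'f_int', 'f_double', 'z']
assert replace_entry(['a', 'z'], 'f', ['f_int']) == ['a', 'z', 'f_int']
```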
diff -Nru cython-0.26.1/Cython/Compiler/Pythran.py cython-0.29.14/Cython/Compiler/Pythran.py
--- cython-0.26.1/Cython/Compiler/Pythran.py 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Pythran.py 2019-07-07 08:37:19.000000000 +0000
@@ -1,19 +1,35 @@
-from .PyrexTypes import BufferType, CType, CTypedefType, CStructOrUnionType
+# cython: language_level=3
+
+from __future__ import absolute_import
+
+from .PyrexTypes import CType, CTypedefType, CStructOrUnionType
+
+import cython
+
+try:
+ import pythran
+ pythran_is_pre_0_9 = tuple(map(int, pythran.__version__.split('.')[0:2])) < (0, 9)
+except ImportError:
+ pythran = None
+ pythran_is_pre_0_9 = True
+
-_pythran_var_prefix = "__pythran__"
# Pythran/Numpy specific operations
+
def has_np_pythran(env):
- while not env is None:
- if hasattr(env, "directives") and env.directives.get('np_pythran', False):
- return True
- env = env.outer_scope
+ if env is None:
+ return False
+ directives = getattr(env, 'directives', None)
+ return (directives and directives.get('np_pythran', False))
+@cython.ccall
def is_pythran_supported_dtype(type_):
if isinstance(type_, CTypedefType):
return is_pythran_supported_type(type_.typedef_base_type)
return type_.is_numeric
-def pythran_type(Ty,ptype="ndarray"):
+
+def pythran_type(Ty, ptype="ndarray"):
if Ty.is_buffer:
ndim,dtype = Ty.ndim, Ty.dtype
if isinstance(dtype, CStructOrUnionType):
@@ -23,109 +39,166 @@
elif isinstance(dtype, CTypedefType):
ctype = dtype.typedef_cname
else:
- raise ValueError("unsupported type %s!" % str(dtype))
- return "pythonic::types::%s<%s,%d>" % (ptype,ctype, ndim)
- from .PyrexTypes import PythranExpr
+ raise ValueError("unsupported type %s!" % dtype)
+ if pythran_is_pre_0_9:
+ return "pythonic::types::%s<%s,%d>" % (ptype,ctype, ndim)
+ else:
+ return "pythonic::types::%s<%s,pythonic::types::pshape<%s>>" % (ptype,ctype, ",".join(("long",)*ndim))
if Ty.is_pythran_expr:
return Ty.pythran_type
#if Ty.is_none:
# return "decltype(pythonic::__builtin__::None)"
if Ty.is_numeric:
return Ty.sign_and_name()
- raise ValueError("unsupported pythran type %s (%s)" % (str(Ty), str(type(Ty))))
- return None
+ raise ValueError("unsupported pythran type %s (%s)" % (Ty, type(Ty)))
+
+@cython.cfunc
def type_remove_ref(ty):
return "typename std::remove_reference<%s>::type" % ty
+
def pythran_binop_type(op, tA, tB):
- return "decltype(std::declval<%s>() %s std::declval<%s>())" % \
- (pythran_type(tA), op, pythran_type(tB))
+ if op == '**':
+ return 'decltype(pythonic::numpy::functor::power{}(std::declval<%s>(), std::declval<%s>()))' % (
+ pythran_type(tA), pythran_type(tB))
+ else:
+ return "decltype(std::declval<%s>() %s std::declval<%s>())" % (
+ pythran_type(tA), op, pythran_type(tB))
+
def pythran_unaryop_type(op, type_):
return "decltype(%sstd::declval<%s>())" % (
op, pythran_type(type_))
+
+@cython.cfunc
+def _index_access(index_code, indices):
+ indexing = ",".join([index_code(idx) for idx in indices])
+ return ('[%s]' if len(indices) == 1 else '(%s)') % indexing
+
+
+def _index_type_code(index_with_type):
+ idx, index_type = index_with_type
+ if idx.is_slice:
+ n = 2 + int(not idx.step.is_none)
+ return "pythonic::__builtin__::functor::slice{}(%s)" % (",".join(["0"]*n))
+ elif index_type.is_int:
+ return "std::declval<%s>()" % index_type.sign_and_name()
+ elif index_type.is_pythran_expr:
+ return "std::declval<%s>()" % index_type.pythran_type
+ raise ValueError("unsupported indexing type %s!" % index_type)
+
+
+def _index_code(idx):
+ if idx.is_slice:
+ values = idx.start, idx.stop, idx.step
+ if idx.step.is_none:
+ func = "contiguous_slice"
+ values = values[:2]
+ else:
+ func = "slice"
+ return "pythonic::types::%s(%s)" % (
+ func, ",".join((v.pythran_result() for v in values)))
+ elif idx.type.is_int:
+ return to_pythran(idx)
+ elif idx.type.is_pythran_expr:
+ return idx.pythran_result()
+ raise ValueError("unsupported indexing type %s" % idx.type)
+
+
def pythran_indexing_type(type_, indices):
- def index_code(idx):
- if idx.is_slice:
- if idx.step.is_none:
- func = "contiguous_slice"
- n = 2
- else:
- func = "slice"
- n = 3
- return "pythonic::types::%s(%s)" % (func,",".join(["0"]*n))
- elif idx.type.is_int:
- return "std::declval<long>()"
- elif idx.type.is_pythran_expr:
- return "std::declval<%s>()" % idx.type.pythran_type
- raise ValueError("unsupported indice type %s!" % idx.type)
- indexing = ",".join(index_code(idx) for idx in indices)
- return type_remove_ref("decltype(std::declval<%s>()(%s))" % (pythran_type(type_), indexing))
+ return type_remove_ref("decltype(std::declval<%s>()%s)" % (
+ pythran_type(type_),
+ _index_access(_index_type_code, indices),
+ ))
+
def pythran_indexing_code(indices):
- def index_code(idx):
- if idx.is_slice:
- values = idx.start, idx.stop, idx.step
- if idx.step.is_none:
- func = "contiguous_slice"
- values = values[:2]
- else:
- func = "slice"
- return "pythonic::types::%s(%s)" % (func,",".join((v.pythran_result() for v in values)))
- elif idx.type.is_int:
- return idx.result()
- elif idx.type.is_pythran_expr:
- return idx.pythran_result()
- raise ValueError("unsupported indice type %s!" % str(idx.type))
- return ",".join(index_code(idx) for idx in indices)
+ return _index_access(_index_code, indices)
+
+def np_func_to_list(func):
+ if not func.is_numpy_attribute:
+ return []
+ return np_func_to_list(func.obj) + [func.attribute]
+
+if pythran is None:
+ def pythran_is_numpy_func_supported(name):
+ return False
+else:
+ def pythran_is_numpy_func_supported(func):
+ CurF = pythran.tables.MODULES['numpy']
+ FL = np_func_to_list(func)
+ for F in FL:
+ CurF = CurF.get(F, None)
+ if CurF is None:
+ return False
+ return True
+
+def pythran_functor(func):
+ func = np_func_to_list(func)
+ submodules = "::".join(func[:-1] + ["functor"])
+ return "pythonic::numpy::%s::%s" % (submodules, func[-1])
def pythran_func_type(func, args):
args = ",".join(("std::declval<%s>()" % pythran_type(a.type) for a in args))
- return "decltype(pythonic::numpy::functor::%s{}(%s))" % (func, args)
+ return "decltype(%s{}(%s))" % (pythran_functor(func), args)
+
-def to_pythran(op,ptype=None):
+@cython.ccall
+def to_pythran(op, ptype=None):
op_type = op.type
- if is_type(op_type,["is_pythran_expr", "is_int", "is_numeric", "is_float",
- "is_complex"]):
+ if op_type.is_int:
+ # Make sure that integer literals always have exactly the type that the templates expect.
+ return op_type.cast_code(op.result())
+ if is_type(op_type, ["is_pythran_expr", "is_numeric", "is_float", "is_complex"]):
return op.result()
if op.is_none:
return "pythonic::__builtin__::None"
if ptype is None:
ptype = pythran_type(op_type)
- assert(op.type.is_pyobject)
+
+ assert op.type.is_pyobject
return "from_python<%s>(%s)" % (ptype, op.py_result())
-def from_pythran():
- return "to_python"
+@cython.cfunc
def is_type(type_, types):
for attr in types:
if getattr(type_, attr, False):
return True
return False
+
def is_pythran_supported_node_or_none(node):
return node.is_none or is_pythran_supported_type(node.type)
+
+@cython.ccall
def is_pythran_supported_type(type_):
pythran_supported = (
- "is_pythran_expr", "is_int", "is_numeric", "is_float", "is_none",
- "is_complex")
+ "is_pythran_expr", "is_int", "is_numeric", "is_float", "is_none", "is_complex")
return is_type(type_, pythran_supported) or is_pythran_expr(type_)
+
def is_pythran_supported_operation_type(type_):
pythran_supported = (
"is_pythran_expr", "is_int", "is_numeric", "is_float", "is_complex")
return is_type(type_,pythran_supported) or is_pythran_expr(type_)
+
+@cython.ccall
def is_pythran_expr(type_):
return type_.is_pythran_expr
+
def is_pythran_buffer(type_):
- return type_.is_numpy_buffer and is_pythran_supported_dtype(type_.dtype) and \
- type_.mode in ("c","strided") and not type_.cast
+ return (type_.is_numpy_buffer and is_pythran_supported_dtype(type_.dtype) and
+ type_.mode in ("c", "strided") and not type_.cast)
+
+def pythran_get_func_include_file(func):
+ func = np_func_to_list(func)
+ return "pythonic/numpy/%s.hpp" % "/".join(func)
def include_pythran_generic(env):
# Generic files
@@ -133,19 +206,13 @@
env.add_include_file("pythonic/python/core.hpp")
env.add_include_file("pythonic/types/bool.hpp")
env.add_include_file("pythonic/types/ndarray.hpp")
- env.add_include_file("<new>") # for placement new
+ env.add_include_file("pythonic/numpy/power.hpp")
+ env.add_include_file("pythonic/__builtin__/slice.hpp")
+ env.add_include_file("<new>") # for placement new
- for i in (8,16,32,64):
+ for i in (8, 16, 32, 64):
env.add_include_file("pythonic/types/uint%d.hpp" % i)
env.add_include_file("pythonic/types/int%d.hpp" % i)
for t in ("float", "float32", "float64", "set", "slice", "tuple", "int",
- "long", "complex", "complex64", "complex128"):
+ "complex", "complex64", "complex128"):
env.add_include_file("pythonic/types/%s.hpp" % t)
-
-def include_pythran_type(env, type_):
- pass
-
-def type_is_numpy(type_):
- if not hasattr(type_, "is_numpy"):
- return False
- return type_.is_numpy
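The new `np_func_to_list`/`pythran_functor` helpers above map a nested NumPy attribute chain onto a `pythonic::numpy::...` functor name. A rough stand-alone sketch, where `NumpyAttr` is a hypothetical stand-in for Cython's attribute nodes:

```python
class NumpyAttr(object):
    # Hypothetical stand-in for Cython's attribute nodes: a chain like
    # np.linalg.norm is represented as nested NumpyAttr objects.
    def __init__(self, attribute, obj=None):
        self.attribute = attribute
        self.obj = obj
        self.is_numpy_attribute = True

def np_func_to_list(func):
    # Walk the attribute chain outwards-in, collecting attribute names.
    if not getattr(func, 'is_numpy_attribute', False):
        return []
    return np_func_to_list(func.obj) + [func.attribute]

def pythran_functor(func):
    parts = np_func_to_list(func)
    submodules = "::".join(parts[:-1] + ["functor"])
    return "pythonic::numpy::%s::%s" % (submodules, parts[-1])

norm = NumpyAttr("norm", NumpyAttr("linalg"))
assert pythran_functor(norm) == "pythonic::numpy::linalg::functor::norm"
```

A top-level function like `np.power` yields `pythonic::numpy::functor::power`, which matches the `'**'` mapping in `pythran_binop_type` above.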
diff -Nru cython-0.26.1/Cython/Compiler/Scanning.pxd cython-0.29.14/Cython/Compiler/Scanning.pxd
--- cython-0.26.1/Cython/Compiler/Scanning.pxd 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Scanning.pxd 2019-11-01 14:13:39.000000000 +0000
@@ -14,13 +14,15 @@
cdef dict kwargs
cdef readonly object __name__ # for tracing the scanner
+## methods commented with '##' out are used by Parsing.py when compiled.
+
@cython.final
cdef class CompileTimeScope:
cdef public dict entries
cdef public CompileTimeScope outer
- cdef declare(self, name, value)
- cdef lookup_here(self, name)
- cpdef lookup(self, name)
+ ##cdef declare(self, name, value)
+ ##cdef lookup_here(self, name)
+ ##cpdef lookup(self, name)
@cython.final
cdef class PyrexScanner(Scanner):
@@ -36,7 +38,7 @@
cdef public list indentation_stack
cdef public indentation_char
cdef public int bracket_nesting_level
- cdef bint async_enabled
+ cdef readonly bint async_enabled
cdef public sy
cdef public systring
@@ -51,15 +53,15 @@
@cython.locals(current_level=cython.long, new_level=cython.long)
cpdef indentation_action(self, text)
#cpdef eof_action(self, text)
- cdef next(self)
- cdef peek(self)
+ ##cdef next(self)
+ ##cdef peek(self)
#cpdef put_back(self, sy, systring)
#cdef unread(self, token, value)
- cdef bint expect(self, what, message = *) except -2
- cdef expect_keyword(self, what, message = *)
- cdef expected(self, what, message = *)
- cdef expect_indent(self)
- cdef expect_dedent(self)
- cdef expect_newline(self, message=*, bint ignore_semicolon=*)
- cdef int enter_async(self) except -1
- cdef int exit_async(self) except -1
+ ##cdef bint expect(self, what, message = *) except -2
+ ##cdef expect_keyword(self, what, message = *)
+ ##cdef expected(self, what, message = *)
+ ##cdef expect_indent(self)
+ ##cdef expect_dedent(self)
+ ##cdef expect_newline(self, message=*, bint ignore_semicolon=*)
+ ##cdef int enter_async(self) except -1
+ ##cdef int exit_async(self) except -1
diff -Nru cython-0.26.1/Cython/Compiler/Scanning.py cython-0.29.14/Cython/Compiler/Scanning.py
--- cython-0.26.1/Cython/Compiler/Scanning.py 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Scanning.py 2019-11-01 14:13:39.000000000 +0000
@@ -1,4 +1,4 @@
-# cython: infer_types=True, language_level=3, py2_import=True
+# cython: infer_types=True, language_level=3, py2_import=True, auto_pickle=False
#
# Cython Scanner
#
@@ -63,6 +63,12 @@
# self.kwargs is almost always unused => avoid call overhead
return method(text, **self.kwargs) if self.kwargs is not None else method(text)
+ def __copy__(self):
+ return self # immutable, no need to copy
+
+ def __deepcopy__(self, memo):
+ return self # immutable, no need to copy
+
#------------------------------------------------------------------
@@ -141,6 +147,8 @@
"""
A SourceDescriptor should be considered immutable.
"""
+ filename = None
+
_file_type = 'pyx'
_escaped_description = None
@@ -162,7 +170,7 @@
if self._escaped_description is None:
esc_desc = \
self.get_description().encode('ASCII', 'replace').decode("ASCII")
- # Use foreward slashes on Windows since these paths
+ # Use forward slashes on Windows since these paths
# will be used in the #line directives in the C/C++ files.
self._escaped_description = esc_desc.replace('\\', '/')
return self._escaped_description
@@ -188,6 +196,12 @@
except AttributeError:
return False
+ def __copy__(self):
+ return self # immutable, no need to copy
+
+ def __deepcopy__(self, memo):
+ return self # immutable, no need to copy
+
class FileSourceDescriptor(SourceDescriptor):
"""
@@ -262,8 +276,6 @@
Instances of this class can be used instead of a filenames if the
code originates from a string object.
"""
- filename = None
-
def __init__(self, name, code):
self.name = name
#self.set_file_type_from_name(name)
@@ -310,12 +322,25 @@
def __init__(self, file, filename, parent_scanner=None,
scope=None, context=None, source_encoding=None, parse_comments=True, initial_pos=None):
Scanner.__init__(self, get_lexicon(), file, filename, initial_pos)
+
+ if filename.is_python_file():
+ self.in_python_file = True
+ self.keywords = set(py_reserved_words)
+ else:
+ self.in_python_file = False
+ self.keywords = set(pyx_reserved_words)
+
+ self.async_enabled = 0
+
if parent_scanner:
self.context = parent_scanner.context
self.included_files = parent_scanner.included_files
self.compile_time_env = parent_scanner.compile_time_env
self.compile_time_eval = parent_scanner.compile_time_eval
self.compile_time_expr = parent_scanner.compile_time_expr
+
+ if parent_scanner.async_enabled:
+ self.enter_async()
else:
self.context = context
self.included_files = scope.included_files
@@ -326,17 +351,11 @@
self.compile_time_env.update(context.options.compile_time_env)
self.parse_comments = parse_comments
self.source_encoding = source_encoding
- if filename.is_python_file():
- self.in_python_file = True
- self.keywords = set(py_reserved_words)
- else:
- self.in_python_file = False
- self.keywords = set(pyx_reserved_words)
self.trace = trace_scanner
self.indentation_stack = [0]
self.indentation_char = None
self.bracket_nesting_level = 0
- self.async_enabled = 0
+
self.begin('INDENT')
self.sy = ''
self.next()
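The `__copy__`/`__deepcopy__` additions above use the standard trick for immutable objects: returning `self` makes `copy.copy()` and `copy.deepcopy()` no-ops for these instances. A minimal illustration:

```python
import copy

class SourceDesc(object):
    """Immutable descriptor; copying returns the same instance."""
    def __init__(self, name):
        self.name = name

    def __copy__(self):
        return self  # immutable, no need to copy

    def __deepcopy__(self, memo):
        return self  # immutable, no need to copy

desc = SourceDesc("example.pyx")
assert copy.copy(desc) is desc
assert copy.deepcopy([desc])[0] is desc  # honoured even inside containers
```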
diff -Nru cython-0.26.1/Cython/Compiler/StringEncoding.py cython-0.29.14/Cython/Compiler/StringEncoding.py
--- cython-0.26.1/Cython/Compiler/StringEncoding.py 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/StringEncoding.py 2018-09-22 14:18:56.000000000 +0000
@@ -191,6 +191,14 @@
return s
+def encoded_string(s, encoding):
+ assert isinstance(s, (_unicode, bytes))
+ s = EncodedString(s)
+ if encoding is not None:
+ s.encoding = encoding
+ return s
+
+
char_from_escape_sequence = {
r'\a' : u'\a',
r'\b' : u'\b',
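The new `encoded_string()` helper wraps a string in an `EncodedString` and tags it with its source encoding. A simplified stand-in showing the intended behaviour (the real `EncodedString` lives in this module and also accepts bytes for Python 2 compatibility):

```python
class EncodedString(str):
    # Simplified sketch: a str subclass carrying the source encoding
    # (None means a unicode string).
    encoding = None

def encoded_string(s, encoding):
    assert isinstance(s, str)
    s = EncodedString(s)
    if encoding is not None:
        s.encoding = encoding
    return s

s = encoded_string("abc", "ISO-8859-1")
assert s == "abc" and s.encoding == "ISO-8859-1"
assert encoded_string("xyz", None).encoding is None
```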
diff -Nru cython-0.26.1/Cython/Compiler/Symtab.py cython-0.29.14/Cython/Compiler/Symtab.py
--- cython-0.26.1/Cython/Compiler/Symtab.py 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Symtab.py 2018-12-14 14:27:50.000000000 +0000
@@ -4,8 +4,9 @@
from __future__ import absolute_import
-import copy
import re
+import copy
+import operator
try:
import __builtin__ as builtins
@@ -17,9 +18,10 @@
from . import Options, Naming
from . import PyrexTypes
from .PyrexTypes import py_object_type, unspecified_type
-from .TypeSlots import \
- pyfunction_signature, pymethod_signature, \
- get_special_method_signature, get_property_accessor_signature
+from .TypeSlots import (
+ pyfunction_signature, pymethod_signature, richcmp_special_methods,
+ get_special_method_signature, get_property_accessor_signature)
+from . import Future
from . import Code
@@ -34,13 +36,13 @@
def c_safe_identifier(cname):
# There are some C limitations on struct entry names.
- if ((cname[:2] == '__'
- and not (cname.startswith(Naming.pyrex_prefix)
- or cname in ('__weakref__', '__dict__')))
- or cname in iso_c99_keywords):
+ if ((cname[:2] == '__' and not (cname.startswith(Naming.pyrex_prefix)
+ or cname in ('__weakref__', '__dict__')))
+ or cname in iso_c99_keywords):
cname = Naming.pyrex_prefix + cname
return cname
+
class BufferAux(object):
writable_needed = False
@@ -59,6 +61,7 @@
# cname string C name of entity
# type PyrexType Type of entity
# doc string Doc string
+ # annotation ExprNode PEP 484/526 annotation
# init string Initial value
# visibility 'private' or 'public' or 'extern'
# is_builtin boolean Is an entry in the Python builtins dict
@@ -88,6 +91,7 @@
# is_arg boolean Is the arg of a method
# is_local boolean Is a local variable
# in_closure boolean Is referenced in an inner scope
+ # in_subscope boolean Belongs to a generator expression scope
# is_readonly boolean Can't be assigned to
# func_cname string C func implementing Python func
# func_modifiers [string] C function modifiers ('inline')
@@ -119,7 +123,7 @@
#
# buffer_aux BufferAux or None Extra information needed for buffer variables
# inline_func_in_pxd boolean Hacky special case for inline function in pxd file.
- # Ideally this should not be necesarry.
+ # Ideally this should not be necessary.
# might_overflow boolean In an arithmetic expression that could cause
# overflow (used for type inference).
# utility_code_definition For some Cython builtins, the utility code
@@ -136,6 +140,7 @@
inline_func_in_pxd = False
borrowed = 0
init = ""
+ annotation = None
visibility = 'private'
is_builtin = 0
is_cglobal = 0
@@ -163,6 +168,7 @@
is_local = 0
in_closure = 0
from_closure = 0
+ in_subscope = 0
is_declared_generic = 0
is_readonly = 0
pyfunc_cname = None
@@ -213,9 +219,12 @@
def __repr__(self):
return "%s(<%x>, name=%s, type=%s)" % (type(self).__name__, id(self), self.name, self.type)
+ def already_declared_here(self):
+ error(self.pos, "Previous declaration is here")
+
def redeclared(self, pos):
error(pos, "'%s' does not match previous declaration" % self.name)
- error(self.pos, "Previous declaration is here")
+ self.already_declared_here()
def all_alternatives(self):
return [self] + self.overloaded_alternatives
@@ -299,6 +308,7 @@
is_py_class_scope = 0
is_c_class_scope = 0
is_closure_scope = 0
+ is_genexpr_scope = 0
is_passthrough = 0
is_cpp_class_scope = 0
is_property_scope = 0
@@ -308,6 +318,7 @@
in_cinclude = 0
nogil = 0
fused_to_specific = None
+ return_type = None
def __init__(self, name, outer_scope, parent_scope):
# The outer_scope is the next scope in the lookup chain.
@@ -324,6 +335,7 @@
self.qualified_name = EncodedString(name)
self.scope_prefix = mangled_name
self.entries = {}
+ self.subscopes = set()
self.const_entries = []
self.type_entries = []
self.sue_entries = []
@@ -341,7 +353,6 @@
self.obj_to_entry = {}
self.buffer_entries = []
self.lambda_defs = []
- self.return_type = None
self.id_counters = {}
def __deepcopy__(self, memo):
@@ -419,6 +430,12 @@
""" Return the module-level scope containing this scope. """
return self.outer_scope.builtin_scope()
+ def iter_local_scopes(self):
+ yield self
+ if self.subscopes:
+ for scope in sorted(self.subscopes, key=operator.attrgetter('scope_prefix')):
+ yield scope
+
def declare(self, name, cname, type, pos, visibility, shadow = 0, is_type = 0, create_wrapper = 0):
# Create new entry, and add to dictionary if
# name is not None. Reports a warning if already
@@ -430,17 +447,33 @@
warning(pos, "'%s' is a reserved name in C." % cname, -1)
entries = self.entries
if name and name in entries and not shadow:
- old_type = entries[name].type
- if self.is_cpp_class_scope and type.is_cfunction and old_type.is_cfunction and type != old_type:
- # C++ method overrides are ok
+ old_entry = entries[name]
+
+ # Reject redeclared C++ functions only if they have the same type signature.
+ cpp_override_allowed = False
+ if type.is_cfunction and old_entry.type.is_cfunction and self.is_cpp():
+ for alt_entry in old_entry.all_alternatives():
+ if type == alt_entry.type:
+ if name == '<init>' and not type.args:
+ # Cython pre-declares the no-args constructor - allow later user definitions.
+ cpp_override_allowed = True
+ break
+ else:
+ cpp_override_allowed = True
+
+ if cpp_override_allowed:
+ # C++ function/method overrides with different signatures are ok.
pass
elif self.is_cpp_class_scope and entries[name].is_inherited:
# Likewise ignore inherited classes.
pass
elif visibility == 'extern':
- warning(pos, "'%s' redeclared " % name, 0)
+ # Silenced outside of "cdef extern" blocks, until we have a safe way to
+ # prevent pxd-defined cpdef functions from ending up here.
+ warning(pos, "'%s' redeclared " % name, 1 if self.in_cinclude else 0)
elif visibility != 'ignore':
error(pos, "'%s' redeclared " % name)
+ entries[name].already_declared_here()
entry = Entry(name, cname, type, pos = pos)
entry.in_cinclude = self.in_cinclude
entry.create_wrapper = create_wrapper
@@ -572,6 +605,7 @@
else:
if not (entry.is_type and entry.type.is_cpp_class):
error(pos, "'%s' redeclared " % name)
+ entry.already_declared_here()
return None
elif scope and entry.type.scope:
warning(pos, "'%s' already defined (ignoring second definition)" % name, 0)
@@ -582,11 +616,13 @@
if base_classes:
if entry.type.base_classes and entry.type.base_classes != base_classes:
error(pos, "Base type does not match previous declaration")
+ entry.already_declared_here()
else:
entry.type.base_classes = base_classes
if templates or entry.type.templates:
if templates != entry.type.templates:
error(pos, "Template parameters do not match previous declaration")
+ entry.already_declared_here()
def declare_inherited_attributes(entry, base_classes):
for base_class in base_classes:
@@ -760,6 +796,10 @@
else:
warning(pos, "Function signature does not match previous declaration", 1)
entry.type = type
+ elif not in_pxd and entry.defined_in_pxd and type.compatible_signature_with(entry.type):
+ # TODO: check that this was done by a signature optimisation and not a user error.
+ #warning(pos, "Function signature does not match previous declaration", 1)
+ entry.type = type
else:
error(pos, "Function signature does not match previous declaration")
else:
@@ -789,13 +829,23 @@
type.entry = entry
return entry
- def add_cfunction(self, name, type, pos, cname, visibility, modifiers):
+ def add_cfunction(self, name, type, pos, cname, visibility, modifiers, inherited=False):
# Add a C function entry without giving it a func_cname.
entry = self.declare(name, cname, type, pos, visibility)
entry.is_cfunction = 1
if modifiers:
entry.func_modifiers = modifiers
- self.cfunc_entries.append(entry)
+ if inherited or type.is_fused:
+ self.cfunc_entries.append(entry)
+ else:
+ # For backwards compatibility reasons, we must keep all non-fused methods
+ # before all fused methods, but separately for each type.
+ i = len(self.cfunc_entries)
+ for cfunc_entry in reversed(self.cfunc_entries):
+ if cfunc_entry.is_inherited or not cfunc_entry.type.is_fused:
+ break
+ i -= 1
+ self.cfunc_entries.insert(i, entry)
return entry
def find(self, name, pos):
@@ -858,10 +908,28 @@
if res is not None:
return res
function = self.lookup("operator%s" % operator)
- if function is None:
+ function_alternatives = []
+ if function is not None:
+ function_alternatives = function.all_alternatives()
+
+ # look-up nonmember methods listed within a class
+ method_alternatives = []
+ if len(operands)==2: # binary operators only
+ for n in range(2):
+ if operands[n].type.is_cpp_class:
+ obj_type = operands[n].type
+ method = obj_type.scope.lookup("operator%s" % operator)
+ if method is not None:
+ method_alternatives += method.all_alternatives()
+
+ if (not method_alternatives) and (not function_alternatives):
return None
+
+ # select the unique alternatives
+ all_alternatives = list(set(method_alternatives + function_alternatives))
+
return PyrexTypes.best_match([arg.type for arg in operands],
- function.all_alternatives())
+ all_alternatives)
def lookup_operator_for_types(self, pos, operator, types):
from .Nodes import Node
@@ -876,18 +944,20 @@
def use_entry_utility_code(self, entry):
self.global_scope().use_entry_utility_code(entry)
- def generate_library_function_declarations(self, code):
- # Generate extern decls for C library funcs used.
- pass
-
def defines_any(self, names):
- # Test whether any of the given names are
- # defined in this scope.
+ # Test whether any of the given names are defined in this scope.
for name in names:
if name in self.entries:
return 1
return 0
+ def defines_any_special(self, names):
+ # Test whether any of the given names are defined as special methods in this scope.
+ for name in names:
+ if name in self.entries and self.entries[name].is_special:
+ return 1
+ return 0
+
def infer_types(self):
from .TypeInference import get_type_inferer
get_type_inferer().infer_types(self)
@@ -899,8 +969,8 @@
else:
return outer.is_cpp()
- def add_include_file(self, filename):
- self.outer_scope.add_include_file(filename)
+ def add_include_file(self, filename, verbatim_include=None, late=False):
+ self.outer_scope.add_include_file(filename, verbatim_include, late)
class PreImportScope(Scope):
@@ -933,10 +1003,12 @@
cname, type = definition
self.declare_var(name, type, None, cname)
- def lookup(self, name, language_level=None):
- # 'language_level' is passed by ModuleScope
- if language_level == 3:
- if name == 'str':
+ def lookup(self, name, language_level=None, str_is_str=None):
+ # 'language_level' and 'str_is_str' are passed by ModuleScope
+ if name == 'str':
+ if str_is_str is None:
+ str_is_str = language_level in (None, 2)
+ if not str_is_str:
name = 'unicode'
return Scope.lookup(self, name)
@@ -1039,8 +1111,8 @@
# doc string Module doc string
# doc_cname string C name of module doc string
# utility_code_list [UtilityCode] Queuing utility codes for forwarding to Code.py
- # python_include_files [string] Standard Python headers to be included
- # include_files [string] Other C headers to be included
+ # c_includes {key: IncludeCode} C headers or verbatim code to be generated
+ # See process_include() for more documentation
# string_to_entry {string : Entry} Map string const to entry
# identifier_to_entry {string : Entry} Map identifier string const to entry
# context Context
@@ -1083,8 +1155,7 @@
self.doc_cname = Naming.moddoc_cname
self.utility_code_list = []
self.module_entries = {}
- self.python_include_files = ["Python.h"]
- self.include_files = []
+ self.c_includes = {}
self.type_names = dict(outer_scope.type_names)
self.pxd_file_loaded = 0
self.cimported_modules = []
@@ -1095,8 +1166,10 @@
self.undeclared_cached_builtins = []
self.namespace_cname = self.module_cname
self._cached_tuple_types = {}
- for var_name in ['__builtins__', '__name__', '__file__', '__doc__', '__path__']:
+ for var_name in ['__builtins__', '__name__', '__file__', '__doc__', '__path__',
+ '__spec__', '__loader__', '__package__', '__cached__']:
self.declare_var(EncodedString(var_name), py_object_type, None)
+ self.process_include(Code.IncludeCode("Python.h", initial=True))
def qualifying_scope(self):
return self.parent_module
@@ -1104,15 +1177,18 @@
def global_scope(self):
return self
- def lookup(self, name, language_level=None):
+ def lookup(self, name, language_level=None, str_is_str=None):
entry = self.lookup_here(name)
if entry is not None:
return entry
if language_level is None:
language_level = self.context.language_level if self.context is not None else 3
+ if str_is_str is None:
+ str_is_str = language_level == 2 or (
+ self.context is not None and Future.unicode_literals not in self.context.future_directives)
- return self.outer_scope.lookup(name, language_level=language_level)
+ return self.outer_scope.lookup(name, language_level=language_level, str_is_str=str_is_str)
def declare_tuple_type(self, pos, components):
components = tuple(components)
@@ -1208,10 +1284,6 @@
scope = scope.find_submodule(submodule)
return scope
- def generate_library_function_declarations(self, code):
- if self.directives['np_pythran']:
- code.putln("import_array();")
-
def lookup_submodule(self, name):
# Return scope for submodule of this module, or None.
if '.' in name:
@@ -1223,15 +1295,50 @@
module = module.lookup_submodule(submodule)
return module
- def add_include_file(self, filename):
- if filename not in self.python_include_files \
- and filename not in self.include_files:
- self.include_files.append(filename)
+ def add_include_file(self, filename, verbatim_include=None, late=False):
+ """
+ Add `filename` as include file. Add `verbatim_include` as
+ verbatim text in the C file.
+ Both `filename` and `verbatim_include` can be `None` or empty.
+ """
+ inc = Code.IncludeCode(filename, verbatim_include, late=late)
+ self.process_include(inc)
+
+ def process_include(self, inc):
+ """
+ Add `inc`, which is an instance of `IncludeCode`, to this
+ `ModuleScope`. This either adds a new element to the
+ `c_includes` dict or it updates an existing entry.
+
+ In detail: the values of the dict `self.c_includes` are
+ instances of `IncludeCode` containing the code to be put in the
+ generated C file. The keys of the dict are needed to ensure
+ uniqueness in two ways: if an include file is specified in
+ multiple "cdef extern" blocks, only one `#include` statement is
+ generated. Second, the same include might occur multiple times
+ if we find it through multiple "cimport" paths. So we use the
+ generated code (of the form `#include "header.h"`) as dict key.
+
+ If verbatim code does not belong to any include file (i.e. it
+ was put in a `cdef extern from *` block), then we use a unique
+ dict key: namely, the `sortkey()`.
+
+ One `IncludeCode` object can contain multiple pieces of C code:
+ one optional "main piece" for the include file and several other
+ pieces for the verbatim code. The `IncludeCode.dict_update`
+ method merges the pieces of two different `IncludeCode` objects
+ if needed.
+ """
+ key = inc.mainpiece()
+ if key is None:
+ key = inc.sortkey()
+ inc.dict_update(self.c_includes, key)
+ inc = self.c_includes[key]
def add_imported_module(self, scope):
if scope not in self.cimported_modules:
- for filename in scope.include_files:
- self.add_include_file(filename)
+ for inc in scope.c_includes.values():
+ self.process_include(inc)
self.cimported_modules.append(scope)
for m in scope.cimported_modules:
self.add_imported_module(m)
@@ -1317,8 +1424,8 @@
api=api, in_pxd=in_pxd, is_cdef=is_cdef)
if is_cdef:
entry.is_cglobal = 1
- if entry.type.is_pyobject:
- entry.init = 0
+ if entry.type.declaration_value:
+ entry.init = entry.type.declaration_value
self.var_entries.append(entry)
else:
entry.is_pyglobal = 1
@@ -1372,10 +1479,11 @@
if entry.utility_code_definition:
self.utility_code_list.append(entry.utility_code_definition)
- def declare_c_class(self, name, pos, defining = 0, implementing = 0,
- module_name = None, base_type = None, objstruct_cname = None,
- typeobj_cname = None, typeptr_cname = None, visibility = 'private', typedef_flag = 0, api = 0,
- buffer_defaults = None, shadow = 0):
+ def declare_c_class(self, name, pos, defining=0, implementing=0,
+ module_name=None, base_type=None, objstruct_cname=None,
+ typeobj_cname=None, typeptr_cname=None, visibility='private',
+ typedef_flag=0, api=0, check_size=None,
+ buffer_defaults=None, shadow=0):
# If this is a non-extern typedef class, expose the typedef, but use
# the non-typedef struct internally to avoid needing forward
# declarations for anonymous structs.
@@ -1407,7 +1515,8 @@
# Make a new entry if needed
#
if not entry or shadow:
- type = PyrexTypes.PyExtensionType(name, typedef_flag, base_type, visibility == 'extern')
+ type = PyrexTypes.PyExtensionType(
+ name, typedef_flag, base_type, visibility == 'extern', check_size=check_size)
type.pos = pos
type.buffer_defaults = buffer_defaults
if objtypedef_cname is not None:
@@ -1649,8 +1758,8 @@
entry = Scope.declare_var(self, name, type, pos,
cname=cname, visibility=visibility,
api=api, in_pxd=in_pxd, is_cdef=is_cdef)
- if type.is_pyobject:
- entry.init = "0"
+ if entry.type.declaration_value:
+ entry.init = entry.type.declaration_value
entry.is_local = 1
entry.in_with_gil_block = self._in_with_gil_block
@@ -1670,6 +1779,7 @@
orig_entry = self.lookup_here(name)
if orig_entry and orig_entry.scope is self and not orig_entry.from_closure:
error(pos, "'%s' redeclared as nonlocal" % name)
+ orig_entry.already_declared_here()
else:
entry = self.lookup(name)
if entry is None or not entry.from_closure:
@@ -1680,7 +1790,10 @@
# Return None if not found.
entry = Scope.lookup(self, name)
if entry is not None:
- if entry.scope is not self and entry.scope.is_closure_scope:
+ entry_scope = entry.scope
+ while entry_scope.is_genexpr_scope:
+ entry_scope = entry_scope.outer_scope
+ if entry_scope is not self and entry_scope.is_closure_scope:
if hasattr(entry.scope, "scope_class"):
raise InternalError("lookup() after scope class created.")
# The actual c fragment for the different scopes differs
@@ -1693,18 +1806,19 @@
return entry
def mangle_closure_cnames(self, outer_scope_cname):
- for entry in self.entries.values():
- if entry.from_closure:
- cname = entry.outer_entry.cname
- if self.is_passthrough:
- entry.cname = cname
- else:
- if cname.startswith(Naming.cur_scope_cname):
- cname = cname[len(Naming.cur_scope_cname)+2:]
- entry.cname = "%s->%s" % (outer_scope_cname, cname)
- elif entry.in_closure:
- entry.original_cname = entry.cname
- entry.cname = "%s->%s" % (Naming.cur_scope_cname, entry.cname)
+ for scope in self.iter_local_scopes():
+ for entry in scope.entries.values():
+ if entry.from_closure:
+ cname = entry.outer_entry.cname
+ if self.is_passthrough:
+ entry.cname = cname
+ else:
+ if cname.startswith(Naming.cur_scope_cname):
+ cname = cname[len(Naming.cur_scope_cname)+2:]
+ entry.cname = "%s->%s" % (outer_scope_cname, cname)
+ elif entry.in_closure:
+ entry.original_cname = entry.cname
+ entry.cname = "%s->%s" % (Naming.cur_scope_cname, entry.cname)
class GeneratorExpressionScope(Scope):
@@ -1712,12 +1826,25 @@
to generators, these can be easily inlined in some cases, so all
we really need is a scope that holds the loop variable(s).
"""
+ is_genexpr_scope = True
+
def __init__(self, outer_scope):
- name = outer_scope.global_scope().next_id(Naming.genexpr_id_ref)
- Scope.__init__(self, name, outer_scope, outer_scope)
+ parent_scope = outer_scope
+ # TODO: also ignore class scopes?
+ while parent_scope.is_genexpr_scope:
+ parent_scope = parent_scope.parent_scope
+ name = parent_scope.global_scope().next_id(Naming.genexpr_id_ref)
+ Scope.__init__(self, name, outer_scope, parent_scope)
self.directives = outer_scope.directives
self.genexp_prefix = "%s%d%s" % (Naming.pyrex_prefix, len(name), name)
+ # Class/ExtType scopes are filled at class creation time, i.e. from the
+ # module init function or surrounding function.
+ while outer_scope.is_genexpr_scope or outer_scope.is_c_class_scope or outer_scope.is_py_class_scope:
+ outer_scope = outer_scope.outer_scope
+ self.var_entries = outer_scope.var_entries # keep declarations outside
+ outer_scope.subscopes.add(self)
+
def mangle(self, prefix, name):
return '%s%s' % (self.genexp_prefix, self.parent_scope.mangle(prefix, name))
@@ -1733,8 +1860,12 @@
# this scope must hold its name exclusively
cname = '%s%s' % (self.genexp_prefix, self.parent_scope.mangle(Naming.var_prefix, name or self.next_id()))
entry = self.declare(name, cname, type, pos, visibility)
- entry.is_variable = 1
- entry.is_local = 1
+ entry.is_variable = True
+ if self.parent_scope.is_module_scope:
+ entry.is_cglobal = True
+ else:
+ entry.is_local = True
+ entry.in_subscope = True
self.var_entries.append(entry)
self.entries[name] = entry
return entry
@@ -1780,7 +1911,7 @@
def declare_var(self, name, type, pos,
cname = None, visibility = 'private',
api = 0, in_pxd = 0, is_cdef = 0,
- allow_pyobject = 0):
+ allow_pyobject=False, allow_memoryview=False):
# Add an entry for an attribute.
if not cname:
cname = name
@@ -1792,11 +1923,12 @@
entry.is_variable = 1
self.var_entries.append(entry)
if type.is_pyobject and not allow_pyobject:
- error(pos,
- "C struct/union member cannot be a Python object")
+ error(pos, "C struct/union member cannot be a Python object")
+ elif type.is_memoryviewslice and not allow_memoryview:
+ # Memory views wrap their buffer owner as a Python object.
+ error(pos, "C struct/union member cannot be a memory view")
if visibility != 'private':
- error(pos,
- "C struct/union member cannot be declared %s" % visibility)
+ error(pos, "C struct/union member cannot be declared %s" % visibility)
return entry
def declare_cfunction(self, name, type, pos,
@@ -1881,6 +2013,7 @@
orig_entry = self.lookup_here(name)
if orig_entry and orig_entry.scope is self and not orig_entry.from_closure:
error(pos, "'%s' redeclared as nonlocal" % name)
+ orig_entry.already_declared_here()
else:
entry = self.lookup(name)
if entry is None:
@@ -2039,8 +2172,13 @@
def declare_pyfunction(self, name, pos, allow_redefine=False):
# Add an entry for a method.
- if name in ('__eq__', '__ne__', '__lt__', '__gt__', '__le__', '__ge__'):
- error(pos, "Special method %s must be implemented via __richcmp__" % name)
+ if name in richcmp_special_methods:
+ if self.lookup_here('__richcmp__'):
+ error(pos, "Cannot define both %s and __richcmp__" % name)
+ elif name == '__richcmp__':
+ for n in richcmp_special_methods:
+ if self.lookup_here(n):
+ error(pos, "Cannot define both %s and __richcmp__" % n)
if name == "__new__":
error(pos, "__new__ method of extension type will change semantics "
"in a future version of Pyrex and Cython. Use __cinit__ instead.")
@@ -2107,7 +2245,9 @@
# TODO(robertwb): Make this an error.
warning(pos,
"Compatible but non-identical C method '%s' not redeclared "
- "in definition part of extension type '%s'. This may cause incorrect vtables to be generated." % (name, self.class_name), 2)
+ "in definition part of extension type '%s'. "
+ "This may cause incorrect vtables to be generated." % (
+ name, self.class_name), 2)
warning(entry.pos, "Previous declaration is here", 2)
entry = self.add_cfunction(name, type, pos, cname, visibility='ignore', modifiers=modifiers)
else:
@@ -2134,11 +2274,11 @@
return entry
- def add_cfunction(self, name, type, pos, cname, visibility, modifiers):
+ def add_cfunction(self, name, type, pos, cname, visibility, modifiers, inherited=False):
# Add a cfunction entry without giving it a func_cname.
prev_entry = self.lookup_here(name)
entry = ClassScope.add_cfunction(self, name, type, pos, cname,
- visibility, modifiers)
+ visibility, modifiers, inherited=inherited)
entry.is_cmethod = 1
entry.prev_entry = prev_entry
return entry
@@ -2199,7 +2339,7 @@
cname = adapt(cname)
entry = self.add_cfunction(base_entry.name, base_entry.type,
base_entry.pos, cname,
- base_entry.visibility, base_entry.func_modifiers)
+ base_entry.visibility, base_entry.func_modifiers, inherited=True)
entry.is_inherited = 1
if base_entry.is_final_cmethod:
entry.is_final_cmethod = True
@@ -2234,8 +2374,7 @@
def declare_var(self, name, type, pos,
cname = None, visibility = 'extern',
- api = 0, in_pxd = 0, is_cdef = 0,
- allow_pyobject = 0, defining = 0):
+ api = 0, in_pxd = 0, is_cdef = 0, defining = 0):
# Add an entry for an attribute.
if not cname:
cname = name
@@ -2244,6 +2383,8 @@
if entry.type.same_as(type):
# Fix with_gil vs nogil.
entry.type = entry.type.with_with_gil(type.with_gil)
+ elif type.is_cfunction and type.compatible_signature_with(entry.type):
+ entry.type = type
else:
error(pos, "Function signature does not match previous declaration")
else:
@@ -2254,22 +2395,36 @@
entry.func_cname = "%s::%s" % (self.type.empty_declaration_code(), cname)
if name != "this" and (defining or name != "<init>"):
self.var_entries.append(entry)
- if type.is_pyobject and not allow_pyobject:
- error(pos,
- "C++ class member cannot be a Python object")
return entry
def declare_cfunction(self, name, type, pos,
cname=None, visibility='extern', api=0, in_pxd=0,
defining=0, modifiers=(), utility_code=None, overridable=False):
- if name in (self.name.split('::')[-1], '__init__') and cname is None:
- cname = self.type.cname
+ class_name = self.name.split('::')[-1]
+ if name in (class_name, '__init__') and cname is None:
+ cname = "%s__init__%s" % (Naming.func_prefix, class_name)
name = '<init>'
- type.return_type = PyrexTypes.InvisibleVoidType()
+ type.return_type = PyrexTypes.CVoidType()
+ # This is called by the actual constructor, but needs to support
+ # arguments that cannot be passed by value.
+ type.original_args = type.args
+ def maybe_ref(arg):
+ if arg.type.is_cpp_class and not arg.type.is_reference:
+ return PyrexTypes.CFuncTypeArg(
+ arg.name, PyrexTypes.c_ref_type(arg.type), arg.pos)
+ else:
+ return arg
+ type.args = [maybe_ref(arg) for arg in type.args]
elif name == '__dealloc__' and cname is None:
- cname = "~%s" % self.type.cname
+ cname = "%s__dealloc__%s" % (Naming.func_prefix, class_name)
name = '<del>'
- type.return_type = PyrexTypes.InvisibleVoidType()
+ type.return_type = PyrexTypes.CVoidType()
+ if name in ('<init>', '<del>') and type.nogil:
+ for base in self.type.base_classes:
+ base_entry = base.scope.lookup(name)
+ if base_entry and not base_entry.type.nogil:
+ error(pos, "Constructor cannot be called without GIL unless all base constructors can also be called without GIL")
+ error(base_entry.pos, "Base constructor defined here.")
prev_entry = self.lookup_here(name)
entry = self.declare_var(name, type, pos,
defining=defining,
@@ -2294,8 +2449,8 @@
# to work with this type.
for base_entry in \
base_scope.inherited_var_entries + base_scope.var_entries:
- #contructor is not inherited
- if base_entry.name == "<init>":
+ #constructor/destructor is not inherited
+ if base_entry.name in ("<init>", "<del>"):
continue
#print base_entry.name, self.entries
if base_entry.name in self.entries:
@@ -2303,6 +2458,7 @@
entry = self.declare(base_entry.name, base_entry.cname,
base_entry.type, None, 'extern')
entry.is_variable = 1
+ entry.is_inherited = 1
self.inherited_var_entries.append(entry)
for base_entry in base_scope.cfunc_entries:
entry = self.declare_cfunction(base_entry.name, base_entry.type,
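Among the Symtab.py changes above, `add_cfunction()` now keeps all non-fused methods ahead of any trailing fused methods in `cfunc_entries`, scanning backwards to find the insertion point. A standalone sketch of that reverse-scan insertion, using simple stand-in records instead of Cython's `Entry` objects:

```python
# Sketch of the add_cfunction() ordering rule: new non-fused methods
# are inserted before the trailing block of fused methods, while
# fused or inherited methods are simply appended.

class Entry:
    def __init__(self, name, is_fused=False, is_inherited=False):
        self.name = name
        self.is_fused = is_fused
        self.is_inherited = is_inherited


def insert_cfunc(entries, entry):
    """Insert entry, keeping non-fused methods before fused ones."""
    if entry.is_inherited or entry.is_fused:
        entries.append(entry)
        return
    i = len(entries)
    for existing in reversed(entries):
        if existing.is_inherited or not existing.is_fused:
            break  # last non-fused (or inherited) method found
        i -= 1
    entries.insert(i, entry)
```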
diff -Nru cython-0.26.1/Cython/Compiler/Tests/TestTreeFragment.py cython-0.29.14/Cython/Compiler/Tests/TestTreeFragment.py
--- cython-0.26.1/Cython/Compiler/Tests/TestTreeFragment.py 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Tests/TestTreeFragment.py 2018-09-22 14:18:56.000000000 +0000
@@ -45,7 +45,7 @@
T = F.substitute({"v" : NameNode(pos=None, name="a")})
v = F.root.stats[1].rhs.operand2.operand1
a = T.stats[1].rhs.operand2.operand1
- self.assertEquals(v.pos, a.pos)
+ self.assertEqual(v.pos, a.pos)
def test_temps(self):
TemplateTransform.temp_name_counter = 0
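The test-suite hunks above and below replace the deprecated `unittest` alias `assertEquals` with `assertEqual`. Both compare for equality, but the alias emits a `DeprecationWarning` and was removed in Python 3.12. A minimal example of the preferred spelling:

```python
# assertEqual is the supported spelling; assertEquals was a
# deprecated alias removed in Python 3.12.
import unittest

class AliasDemo(unittest.TestCase):
    def test_equal(self):
        self.assertEqual(2 + 2, 4)  # preferred over assertEquals

suite = unittest.TestLoader().loadTestsFromTestCase(AliasDemo)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
```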
diff -Nru cython-0.26.1/Cython/Compiler/Tests/TestTreePath.py cython-0.29.14/Cython/Compiler/Tests/TestTreePath.py
--- cython-0.26.1/Cython/Compiler/Tests/TestTreePath.py 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Tests/TestTreePath.py 2018-09-22 14:18:56.000000000 +0000
@@ -20,75 +20,75 @@
def test_node_path(self):
t = self._build_tree()
- self.assertEquals(2, len(find_all(t, "//DefNode")))
- self.assertEquals(2, len(find_all(t, "//NameNode")))
- self.assertEquals(1, len(find_all(t, "//ReturnStatNode")))
- self.assertEquals(1, len(find_all(t, "//DefNode//ReturnStatNode")))
+ self.assertEqual(2, len(find_all(t, "//DefNode")))
+ self.assertEqual(2, len(find_all(t, "//NameNode")))
+ self.assertEqual(1, len(find_all(t, "//ReturnStatNode")))
+ self.assertEqual(1, len(find_all(t, "//DefNode//ReturnStatNode")))
def test_node_path_star(self):
t = self._build_tree()
- self.assertEquals(10, len(find_all(t, "//*")))
- self.assertEquals(8, len(find_all(t, "//DefNode//*")))
- self.assertEquals(0, len(find_all(t, "//NameNode//*")))
+ self.assertEqual(10, len(find_all(t, "//*")))
+ self.assertEqual(8, len(find_all(t, "//DefNode//*")))
+ self.assertEqual(0, len(find_all(t, "//NameNode//*")))
def test_node_path_attribute(self):
t = self._build_tree()
- self.assertEquals(2, len(find_all(t, "//NameNode/@name")))
- self.assertEquals(['fun', 'decorator'], find_all(t, "//NameNode/@name"))
+ self.assertEqual(2, len(find_all(t, "//NameNode/@name")))
+ self.assertEqual(['fun', 'decorator'], find_all(t, "//NameNode/@name"))
def test_node_path_attribute_dotted(self):
t = self._build_tree()
- self.assertEquals(1, len(find_all(t, "//ReturnStatNode/@value.name")))
- self.assertEquals(['fun'], find_all(t, "//ReturnStatNode/@value.name"))
+ self.assertEqual(1, len(find_all(t, "//ReturnStatNode/@value.name")))
+ self.assertEqual(['fun'], find_all(t, "//ReturnStatNode/@value.name"))
def test_node_path_child(self):
t = self._build_tree()
- self.assertEquals(1, len(find_all(t, "//DefNode/ReturnStatNode/NameNode")))
- self.assertEquals(1, len(find_all(t, "//ReturnStatNode/NameNode")))
+ self.assertEqual(1, len(find_all(t, "//DefNode/ReturnStatNode/NameNode")))
+ self.assertEqual(1, len(find_all(t, "//ReturnStatNode/NameNode")))
def test_node_path_node_predicate(self):
t = self._build_tree()
- self.assertEquals(0, len(find_all(t, "//DefNode[.//ForInStatNode]")))
- self.assertEquals(2, len(find_all(t, "//DefNode[.//NameNode]")))
- self.assertEquals(1, len(find_all(t, "//ReturnStatNode[./NameNode]")))
- self.assertEquals(Nodes.ReturnStatNode,
- type(find_first(t, "//ReturnStatNode[./NameNode]")))
+ self.assertEqual(0, len(find_all(t, "//DefNode[.//ForInStatNode]")))
+ self.assertEqual(2, len(find_all(t, "//DefNode[.//NameNode]")))
+ self.assertEqual(1, len(find_all(t, "//ReturnStatNode[./NameNode]")))
+ self.assertEqual(Nodes.ReturnStatNode,
+ type(find_first(t, "//ReturnStatNode[./NameNode]")))
def test_node_path_node_predicate_step(self):
t = self._build_tree()
- self.assertEquals(2, len(find_all(t, "//DefNode[.//NameNode]")))
- self.assertEquals(8, len(find_all(t, "//DefNode[.//NameNode]//*")))
- self.assertEquals(1, len(find_all(t, "//DefNode[.//NameNode]//ReturnStatNode")))
- self.assertEquals(Nodes.ReturnStatNode,
- type(find_first(t, "//DefNode[.//NameNode]//ReturnStatNode")))
+ self.assertEqual(2, len(find_all(t, "//DefNode[.//NameNode]")))
+ self.assertEqual(8, len(find_all(t, "//DefNode[.//NameNode]//*")))
+ self.assertEqual(1, len(find_all(t, "//DefNode[.//NameNode]//ReturnStatNode")))
+ self.assertEqual(Nodes.ReturnStatNode,
+ type(find_first(t, "//DefNode[.//NameNode]//ReturnStatNode")))
def test_node_path_attribute_exists(self):
t = self._build_tree()
- self.assertEquals(2, len(find_all(t, "//NameNode[@name]")))
- self.assertEquals(ExprNodes.NameNode,
- type(find_first(t, "//NameNode[@name]")))
+ self.assertEqual(2, len(find_all(t, "//NameNode[@name]")))
+ self.assertEqual(ExprNodes.NameNode,
+ type(find_first(t, "//NameNode[@name]")))
def test_node_path_attribute_exists_not(self):
t = self._build_tree()
- self.assertEquals(0, len(find_all(t, "//NameNode[not(@name)]")))
- self.assertEquals(2, len(find_all(t, "//NameNode[not(@honking)]")))
+ self.assertEqual(0, len(find_all(t, "//NameNode[not(@name)]")))
+ self.assertEqual(2, len(find_all(t, "//NameNode[not(@honking)]")))
def test_node_path_and(self):
t = self._build_tree()
- self.assertEquals(1, len(find_all(t, "//DefNode[.//ReturnStatNode and .//NameNode]")))
- self.assertEquals(0, len(find_all(t, "//NameNode[@honking and @name]")))
- self.assertEquals(0, len(find_all(t, "//NameNode[@name and @honking]")))
- self.assertEquals(2, len(find_all(t, "//DefNode[.//NameNode[@name] and @name]")))
+ self.assertEqual(1, len(find_all(t, "//DefNode[.//ReturnStatNode and .//NameNode]")))
+ self.assertEqual(0, len(find_all(t, "//NameNode[@honking and @name]")))
+ self.assertEqual(0, len(find_all(t, "//NameNode[@name and @honking]")))
+ self.assertEqual(2, len(find_all(t, "//DefNode[.//NameNode[@name] and @name]")))
def test_node_path_attribute_string_predicate(self):
t = self._build_tree()
- self.assertEquals(1, len(find_all(t, "//NameNode[@name = 'decorator']")))
+ self.assertEqual(1, len(find_all(t, "//NameNode[@name = 'decorator']")))
def test_node_path_recursive_predicate(self):
t = self._build_tree()
- self.assertEquals(2, len(find_all(t, "//DefNode[.//NameNode[@name]]")))
- self.assertEquals(1, len(find_all(t, "//DefNode[.//NameNode[@name = 'decorator']]")))
- self.assertEquals(1, len(find_all(t, "//DefNode[.//ReturnStatNode[./NameNode[@name = 'fun']]/NameNode]")))
+ self.assertEqual(2, len(find_all(t, "//DefNode[.//NameNode[@name]]")))
+ self.assertEqual(1, len(find_all(t, "//DefNode[.//NameNode[@name = 'decorator']]")))
+ self.assertEqual(1, len(find_all(t, "//DefNode[.//ReturnStatNode[./NameNode[@name = 'fun']]/NameNode]")))
if __name__ == '__main__':
unittest.main()
diff -Nru cython-0.26.1/Cython/Compiler/Tests/TestUtilityLoad.py cython-0.29.14/Cython/Compiler/Tests/TestUtilityLoad.py
--- cython-0.26.1/Cython/Compiler/Tests/TestUtilityLoad.py 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Tests/TestUtilityLoad.py 2018-09-22 14:18:56.000000000 +0000
@@ -23,27 +23,27 @@
def test_load_as_string(self):
got = strip_2tup(self.cls.load_as_string(self.name))
- self.assertEquals(got, self.expected)
+ self.assertEqual(got, self.expected)
got = strip_2tup(self.cls.load_as_string(self.name, self.filename))
- self.assertEquals(got, self.expected)
+ self.assertEqual(got, self.expected)
def test_load(self):
utility = self.cls.load(self.name)
got = strip_2tup((utility.proto, utility.impl))
- self.assertEquals(got, self.expected)
+ self.assertEqual(got, self.expected)
required, = utility.requires
got = strip_2tup((required.proto, required.impl))
- self.assertEquals(got, self.required)
+ self.assertEqual(got, self.required)
utility = self.cls.load(self.name, from_file=self.filename)
got = strip_2tup((utility.proto, utility.impl))
- self.assertEquals(got, self.expected)
+ self.assertEqual(got, self.expected)
utility = self.cls.load_cached(self.name, from_file=self.filename)
got = strip_2tup((utility.proto, utility.impl))
- self.assertEquals(got, self.expected)
+ self.assertEqual(got, self.expected)
class TestTempitaUtilityLoader(TestUtilityLoader):
@@ -60,20 +60,20 @@
def test_load_as_string(self):
got = strip_2tup(self.cls.load_as_string(self.name, context=self.context))
- self.assertEquals(got, self.expected_tempita)
+ self.assertEqual(got, self.expected_tempita)
def test_load(self):
utility = self.cls.load(self.name, context=self.context)
got = strip_2tup((utility.proto, utility.impl))
- self.assertEquals(got, self.expected_tempita)
+ self.assertEqual(got, self.expected_tempita)
required, = utility.requires
got = strip_2tup((required.proto, required.impl))
- self.assertEquals(got, self.required_tempita)
+ self.assertEqual(got, self.required_tempita)
utility = self.cls.load(self.name, from_file=self.filename, context=self.context)
got = strip_2tup((utility.proto, utility.impl))
- self.assertEquals(got, self.expected_tempita)
+ self.assertEqual(got, self.expected_tempita)
class TestCythonUtilityLoader(TestTempitaUtilityLoader):
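Nearly all of the test churn above is the mechanical rename of `unittest`'s `assertEquals` alias to the canonical `assertEqual` (the alias was deprecated since Python 3.2 and removed in 3.12). A minimal self-contained illustration, using a hypothetical test case rather than the real Cython suite:

```python
import unittest


class NodeCountTest(unittest.TestCase):
    """Hypothetical stand-in for the tree-path tests above."""

    def test_counts(self):
        found = ["NameNode", "NameNode"]  # pretend find_all() result
        # assertEqual is the canonical spelling; the assertEquals
        # alias warned under Python 3 and is gone in 3.12+.
        self.assertEqual(2, len(found))


# Run the case programmatically; no console runner needed.
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(NodeCountTest).run(result)
```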
diff -Nru cython-0.26.1/Cython/Compiler/TreeFragment.py cython-0.29.14/Cython/Compiler/TreeFragment.py
--- cython-0.26.1/Cython/Compiler/TreeFragment.py 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/TreeFragment.py 2018-11-24 09:20:06.000000000 +0000
@@ -24,13 +24,13 @@
class StringParseContext(Main.Context):
- def __init__(self, name, include_directories=None, compiler_directives=None):
+ def __init__(self, name, include_directories=None, compiler_directives=None, cpp=False):
if include_directories is None:
include_directories = []
if compiler_directives is None:
compiler_directives = {}
- Main.Context.__init__(self, include_directories, compiler_directives,
- create_testscope=False)
+ # TODO: see if "language_level=3" also works for our internal code here.
+ Main.Context.__init__(self, include_directories, compiler_directives, cpp=cpp, language_level=2)
self.module_name = name
def find_module(self, module_name, relative_to=None, pos=None, need_pxd=1, absolute_fallback=True):
@@ -209,8 +209,9 @@
"""Strips empty lines and common indentation from the list of strings given in lines"""
# TODO: Facilitate textwrap.indent instead
lines = [x for x in lines if x.strip() != u""]
- minindent = min([len(_match_indent(x).group(0)) for x in lines])
- lines = [x[minindent:] for x in lines]
+ if lines:
+ minindent = min([len(_match_indent(x).group(0)) for x in lines])
+ lines = [x[minindent:] for x in lines]
return lines
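The `strip_common_indent` fix above guards against an input that is empty after blank-line filtering, because `min()` over an empty sequence raises `ValueError`. A standalone sketch of the same logic (with `_match_indent` assumed to be a leading-whitespace regex, as in `TreeFragment.py`):

```python
import re

_match_indent = re.compile(u"^ *").match


def strip_common_indentation(lines):
    """Strip blank lines and the common leading indentation."""
    lines = [x for x in lines if x.strip() != u""]
    if lines:  # min() over an empty list would raise ValueError
        minindent = min(len(_match_indent(x).group(0)) for x in lines)
        lines = [x[minindent:] for x in lines]
    return lines
```

With the guard, `strip_common_indentation(["", "   "])` returns `[]` instead of raising.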
diff -Nru cython-0.26.1/Cython/Compiler/TreePath.py cython-0.29.14/Cython/Compiler/TreePath.py
--- cython-0.26.1/Cython/Compiler/TreePath.py 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/TreePath.py 2018-09-22 14:18:56.000000000 +0000
@@ -12,14 +12,14 @@
import operator
path_tokenizer = re.compile(
- "("
- "'[^']*'|\"[^\"]*\"|"
- "//?|"
- "\(\)|"
- "==?|"
- "[/.*\[\]\(\)@])|"
- "([^/\[\]\(\)@=\s]+)|"
- "\s+"
+ r"("
+ r"'[^']*'|\"[^\"]*\"|"
+ r"//?|"
+ r"\(\)|"
+ r"==?|"
+ r"[/.*\[\]()@])|"
+ r"([^/\[\]()@=\s]+)|"
+ r"\s+"
).findall
def iterchildren(node, attr_name):
@@ -180,6 +180,8 @@
return int(value)
except ValueError:
pass
+ elif token[1].isdigit():
+ return int(token[1])
else:
name = token[1].lower()
if name == 'true':
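The tokenizer rewrite above switches the pattern fragments to raw strings: without the `r` prefix, sequences like `\s` and `\[` are invalid string escapes that newer Python 3 releases flag with a `DeprecationWarning` (and eventually a `SyntaxWarning`). A reduced, runnable sketch using the raw-string pattern from the hunk:

```python
import re

# Raw strings keep the backslashes intact for the regex engine.
path_tokenizer = re.compile(
    r"("
    r"'[^']*'|\"[^\"]*\"|"
    r"//?|"
    r"\(\)|"
    r"==?|"
    r"[/.*\[\]()@])|"
    r"([^/\[\]()@=\s]+)|"
    r"\s+"
).findall

# Each hit is a (symbol, name) pair; exactly one side is non-empty.
tokens = path_tokenizer("//DefNode[.//NameNode]")
```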
diff -Nru cython-0.26.1/Cython/Compiler/TypeInference.py cython-0.29.14/Cython/Compiler/TypeInference.py
--- cython-0.26.1/Cython/Compiler/TypeInference.py 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/TypeInference.py 2018-09-22 14:18:56.000000000 +0000
@@ -250,8 +250,7 @@
def visit_YieldExprNode(self, node):
if self.parallel_block_stack:
- error(node.pos, "Yield not allowed in parallel sections")
-
+ error(node.pos, "'%s' not allowed in parallel sections" % node.expr_keyword)
return node
def visit_ReturnStatNode(self, node):
@@ -307,6 +306,13 @@
else:
return self.visit_dangerous_node(node)
+ def visit_SimpleCallNode(self, node):
+ if node.function.is_name and node.function.name == 'abs':
+ # Overflows for minimum value of fixed size ints.
+ return self.visit_dangerous_node(node)
+ else:
+ return self.visit_neutral_node(node)
+
visit_UnopNode = visit_neutral_node
visit_UnaryMinusNode = visit_dangerous_node
@@ -372,7 +378,7 @@
self.set_entry_type(entry, py_object_type)
return
- # Set of assignemnts
+ # Set of assignments
assignments = set()
assmts_resolved = set()
dependencies = {}
@@ -409,6 +415,24 @@
entry = node.entry
return spanning_type(types, entry.might_overflow, entry.pos, scope)
+ def inferred_types(entry):
+ has_none = False
+ has_pyobjects = False
+ types = []
+ for assmt in entry.cf_assignments:
+ if assmt.rhs.is_none:
+ has_none = True
+ else:
+ rhs_type = assmt.inferred_type
+ if rhs_type and rhs_type.is_pyobject:
+ has_pyobjects = True
+ types.append(rhs_type)
+ # Ignore None assignments as long as there are concrete Python type assignments.
+ # but include them if None is the only assigned Python object.
+ if has_none and not has_pyobjects:
+ types.append(py_object_type)
+ return types
+
def resolve_assignments(assignments):
resolved = set()
for assmt in assignments:
@@ -461,7 +485,7 @@
continue
entry_type = py_object_type
if assmts_resolved.issuperset(entry.cf_assignments):
- types = [assmt.inferred_type for assmt in entry.cf_assignments]
+ types = inferred_types(entry)
if types and all(types):
entry_type = spanning_type(
types, entry.might_overflow, entry.pos, scope)
@@ -471,8 +495,9 @@
def reinfer():
dirty = False
for entry in inferred:
- types = [assmt.infer_type()
- for assmt in entry.cf_assignments]
+ for assmt in entry.cf_assignments:
+ assmt.infer_type()
+ types = inferred_types(entry)
new_type = spanning_type(types, entry.might_overflow, entry.pos, scope)
if new_type != entry.type:
self.set_entry_type(entry, new_type)
@@ -538,6 +563,8 @@
# find_spanning_type() only returns 'bint' for clean boolean
# operations without other int types, so this is safe, too
return result_type
+ elif result_type.is_pythran_expr:
+ return result_type
elif result_type.is_ptr:
# Any pointer except (signed|unsigned|) char* can't implicitly
# become a PyObject, and inferring char* is now accepted, too.
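The new `inferred_types()` helper above ignores `None` assignments while concrete Python-typed assignments exist, and only falls back to the generic object type when `None` is the sole Python value ever assigned. A toy model of that filtering, with plain strings standing in for Cython's type objects (not the real compiler data structures):

```python
def inferred_types(assignments):
    """assignments: list of (is_none, rhs_type) pairs; 'pyobject:*'
    strings stand in for Cython's Python object types."""
    has_none = False
    has_pyobjects = False
    types = []
    for is_none, rhs_type in assignments:
        if is_none:
            has_none = True
        else:
            if rhs_type and rhs_type.startswith('pyobject'):
                has_pyobjects = True
            types.append(rhs_type)
    # Ignore None assignments as long as there are concrete Python
    # type assignments, but fall back to the generic object type if
    # None is the only Python object ever assigned.
    if has_none and not has_pyobjects:
        types.append('pyobject:object')
    return types
```

So `x = None; x = []` still infers a list-like type, while `x = None; x = 1` must widen to a generic object to also admit `None`.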
diff -Nru cython-0.26.1/Cython/Compiler/TypeSlots.py cython-0.29.14/Cython/Compiler/TypeSlots.py
--- cython-0.26.1/Cython/Compiler/TypeSlots.py 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/TypeSlots.py 2019-11-01 14:13:39.000000000 +0000
@@ -12,6 +12,8 @@
invisible = ['__cinit__', '__dealloc__', '__richcmp__',
'__nonzero__', '__bool__']
+richcmp_special_methods = ['__eq__', '__ne__', '__lt__', '__gt__', '__le__', '__ge__']
+
class Signature(object):
# Method slot signature descriptor.
@@ -303,11 +305,11 @@
def slot_code(self, scope):
entry = scope.lookup_here(self.method_name)
- if entry and entry.func_cname:
+ if entry and entry.is_special and entry.func_cname:
return entry.func_cname
for method_name in self.alternatives:
entry = scope.lookup_here(method_name)
- if entry and entry.func_cname:
+ if entry and entry.is_special and entry.func_cname:
return entry.func_cname
return "0"
@@ -363,12 +365,13 @@
self.method = method
def slot_code(self, scope):
+ entry = scope.lookup_here(self.method)
if (self.slot_name != 'tp_new'
and scope.parent_type.base_type
and not scope.has_pyobject_attrs
and not scope.has_memoryview_attrs
and not scope.has_cpp_class_attrs
- and not scope.lookup_here(self.method)):
+ and not (entry and entry.is_special)):
# if the type does not have object attributes, it can
# delegate GC methods to its parent - iff the parent
# functions are defined in the same module
@@ -377,6 +380,8 @@
entry = scope.parent_scope.lookup_here(scope.parent_type.base_type.name)
if entry.visibility != 'extern':
return self.slot_code(parent_type_scope)
+ if entry and not entry.is_special:
+ return "0"
return InternalMethodSlot.slot_code(self, scope)
@@ -394,12 +399,23 @@
self.default_value = default_value
def slot_code(self, scope):
- if scope.defines_any(self.user_methods):
+ if scope.defines_any_special(self.user_methods):
return InternalMethodSlot.slot_code(self, scope)
else:
return self.default_value
+class RichcmpSlot(MethodSlot):
+ def slot_code(self, scope):
+ entry = scope.lookup_here(self.method_name)
+ if entry and entry.is_special and entry.func_cname:
+ return entry.func_cname
+ elif scope.defines_any_special(richcmp_special_methods):
+ return scope.mangle_internal(self.slot_name)
+ else:
+ return "0"
+
+
class TypeFlagsSlot(SlotDescriptor):
# Descriptor for the type flags slot.
@@ -559,6 +575,8 @@
slot = method_name_to_slot.get(name)
if slot:
return slot.signature
+ elif name in richcmp_special_methods:
+ return ibinaryfunc
else:
return None
@@ -594,6 +612,20 @@
return slot_code
return None
+
+def get_slot_by_name(slot_name):
+ # For now, only search the type struct, no referenced sub-structs.
+ for slot in slot_table:
+ if slot.slot_name == slot_name:
+ return slot
+ assert False, "Slot not found: %s" % slot_name
+
+
+def get_slot_code_by_name(scope, slot_name):
+ slot = get_slot_by_name(slot_name)
+ return slot.slot_code(scope)
+
+
#------------------------------------------------------------------------------------------
#
# Signatures for generic Python functions and methods.
@@ -660,8 +692,7 @@
cmpfunc = Signature("TO", "i") # typedef int (*cmpfunc)(PyObject *, PyObject *);
reprfunc = Signature("T", "O") # typedef PyObject *(*reprfunc)(PyObject *);
hashfunc = Signature("T", "h") # typedef Py_hash_t (*hashfunc)(PyObject *);
- # typedef PyObject *(*richcmpfunc) (PyObject *, PyObject *, int);
-richcmpfunc = Signature("OOi", "O") # typedef PyObject *(*richcmpfunc) (PyObject *, PyObject *, int);
+richcmpfunc = Signature("TOi", "O") # typedef PyObject *(*richcmpfunc) (PyObject *, PyObject *, int);
getiterfunc = Signature("T", "O") # typedef PyObject *(*getiterfunc) (PyObject *);
iternextfunc = Signature("T", "O") # typedef PyObject *(*iternextfunc) (PyObject *);
descrgetfunc = Signature("TOO", "O") # typedef PyObject *(*descrgetfunc) (PyObject *, PyObject *, PyObject *);
@@ -794,7 +825,8 @@
slot_table = (
ConstructorSlot("tp_dealloc", '__dealloc__'),
- EmptySlot("tp_print"), #MethodSlot(printfunc, "tp_print", "__print__"),
+ EmptySlot("tp_print", ifdef="PY_VERSION_HEX < 0x030800b4"),
+ EmptySlot("tp_vectorcall_offset", ifdef="PY_VERSION_HEX >= 0x030800b4"),
EmptySlot("tp_getattr"),
EmptySlot("tp_setattr"),
@@ -823,8 +855,7 @@
GCDependentSlot("tp_traverse"),
GCClearReferencesSlot("tp_clear"),
- # Later -- synthesize a method to split into separate ops?
- MethodSlot(richcmpfunc, "tp_richcompare", "__richcmp__", inherited=False), # Py3 checks for __hash__
+ RichcmpSlot(richcmpfunc, "tp_richcompare", "__richcmp__", inherited=False), # Py3 checks for __hash__
EmptySlot("tp_weaklistoffset"),
@@ -857,6 +888,8 @@
EmptySlot("tp_del"),
EmptySlot("tp_version_tag"),
EmptySlot("tp_finalize", ifdef="PY_VERSION_HEX >= 0x030400a1"),
+ EmptySlot("tp_vectorcall", ifdef="PY_VERSION_HEX >= 0x030800b1"),
+ EmptySlot("tp_print", ifdef="PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000"),
)
#------------------------------------------------------------------------------------------
@@ -874,6 +907,7 @@
MethodSlot(ssizessizeobjargproc, "", "__setslice__")
MethodSlot(ssizessizeargproc, "", "__delslice__")
MethodSlot(getattrofunc, "", "__getattr__")
+MethodSlot(getattrofunc, "", "__getattribute__")
MethodSlot(setattrofunc, "", "__setattr__")
MethodSlot(delattrofunc, "", "__delattr__")
MethodSlot(descrgetfunc, "", "__get__")
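The `RichcmpSlot` addition lets extension types implement `__eq__`, `__ne__`, `__lt__`, `__gt__`, `__le__` and `__ge__` individually; a generated `tp_richcompare` helper then dispatches the C-level comparison opcode to the matching special method. A loose pure-Python model of that dispatch (the constants mirror CPython's `Py_LT`..`Py_GE` values; the helper and class names are hypothetical):

```python
# CPython's richcmp opcodes, in their actual numeric order.
Py_LT, Py_LE, Py_EQ, Py_NE, Py_GT, Py_GE = range(6)

_OP_TO_METHOD = {
    Py_EQ: "__eq__", Py_NE: "__ne__",
    Py_LT: "__lt__", Py_GT: "__gt__",
    Py_LE: "__le__", Py_GE: "__ge__",
}


def richcompare(obj, other, op):
    """One entry point fans out to the per-operator special method,
    as the synthesized tp_richcompare slot function does."""
    method = getattr(type(obj), _OP_TO_METHOD[op], None)
    if method is None:
        return NotImplemented
    return method(obj, other)


class Point:
    def __init__(self, x):
        self.x = x

    def __eq__(self, other):
        return self.x == other.x
```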
diff -Nru cython-0.26.1/Cython/Compiler/UtilityCode.py cython-0.29.14/Cython/Compiler/UtilityCode.py
--- cython-0.26.1/Cython/Compiler/UtilityCode.py 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/UtilityCode.py 2018-09-22 14:18:56.000000000 +0000
@@ -8,11 +8,10 @@
class NonManglingModuleScope(Symtab.ModuleScope):
- cpp = False
-
def __init__(self, prefix, *args, **kw):
self.prefix = prefix
self.cython_scope = None
+ self.cpp = kw.pop('cpp', False)
Symtab.ModuleScope.__init__(self, *args, **kw)
def add_imported_entry(self, name, entry, pos):
@@ -44,7 +43,7 @@
if self.scope is None:
self.scope = NonManglingModuleScope(
- self.prefix, module_name, parent_module=None, context=self)
+ self.prefix, module_name, parent_module=None, context=self, cpp=self.cpp)
return self.scope
@@ -119,7 +118,8 @@
from . import Pipeline, ParseTreeTransforms
context = CythonUtilityCodeContext(
- self.name, compiler_directives=self.compiler_directives)
+ self.name, compiler_directives=self.compiler_directives,
+ cpp=cython_scope.is_cpp() if cython_scope else False)
context.prefix = self.prefix
context.cython_scope = cython_scope
#context = StringParseContext(self.name)
@@ -223,7 +223,7 @@
for dep in self.requires:
if dep.is_cython_utility:
- dep.declare_in_scope(dest_scope)
+ dep.declare_in_scope(dest_scope, cython_scope=cython_scope)
return original_scope
diff -Nru cython-0.26.1/Cython/Compiler/UtilNodes.py cython-0.29.14/Cython/Compiler/UtilNodes.py
--- cython-0.26.1/Cython/Compiler/UtilNodes.py 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/UtilNodes.py 2018-09-22 14:18:56.000000000 +0000
@@ -1,7 +1,7 @@
#
# Nodes used as utilities and support for transforms etc.
# These often make up sets including both Nodes and ExprNodes
-# so it is convenient to have them in a seperate module.
+# so it is convenient to have them in a separate module.
#
from __future__ import absolute_import
@@ -267,6 +267,9 @@
def infer_type(self, env):
return self.subexpression.infer_type(env)
+ def may_be_none(self):
+ return self.subexpression.may_be_none()
+
def result(self):
return self.subexpression.result()
diff -Nru cython-0.26.1/Cython/Compiler/Visitor.pxd cython-0.29.14/Cython/Compiler/Visitor.pxd
--- cython-0.26.1/Cython/Compiler/Visitor.pxd 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Visitor.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -12,10 +12,13 @@
cdef _visitchild(self, child, parent, attrname, idx)
cdef dict _visitchildren(self, parent, attrs)
cpdef visitchildren(self, parent, attrs=*)
+ cdef _raise_compiler_error(self, child, e)
cdef class VisitorTransform(TreeVisitor):
- cpdef visitchildren(self, parent, attrs=*)
- cpdef recurse_to_children(self, node)
+ cdef dict _process_children(self, parent, attrs=*)
+ cpdef visitchildren(self, parent, attrs=*, exclude=*)
+ cdef list _flatten_list(self, list orig_list)
+ cdef list _select_attrs(self, attrs, exclude)
cdef class CythonTransform(VisitorTransform):
cdef public context
diff -Nru cython-0.26.1/Cython/Compiler/Visitor.py cython-0.29.14/Cython/Compiler/Visitor.py
--- cython-0.26.1/Cython/Compiler/Visitor.py 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Compiler/Visitor.py 2018-11-24 09:20:06.000000000 +0000
@@ -1,4 +1,6 @@
# cython: infer_types=True
+# cython: language_level=3
+# cython: auto_pickle=False
#
# Tree visitor and transform framework
@@ -76,7 +78,7 @@
self.dispatch_table = {}
self.access_path = []
- def dump_node(self, node, indent=0):
+ def dump_node(self, node):
ignored = list(node.child_attrs or []) + [
u'child_attrs', u'pos', u'gil_message', u'cpp_message', u'subexprs']
values = []
@@ -88,7 +90,6 @@
source = os.path.basename(source.get_description())
values.append(u'%s:%s:%s' % (source, pos[1], pos[2]))
attribute_names = dir(node)
- attribute_names.sort()
for attr in attribute_names:
if attr in ignored:
continue
@@ -154,7 +155,6 @@
cls = type(obj)
pattern = "visit_%s"
mro = inspect.getmro(cls)
- handler_method = None
for mro_cls in mro:
handler_method = getattr(self, pattern % mro_cls.__name__, None)
if handler_method is not None:
@@ -244,30 +244,46 @@
was not, an exception will be raised. (Typically you want to ensure that you
are within a StatListNode or similar before doing this.)
"""
- def visitchildren(self, parent, attrs=None):
+ def visitchildren(self, parent, attrs=None, exclude=None):
+ # generic def entry point for calls from Python subclasses
+ if exclude is not None:
+ attrs = self._select_attrs(parent.child_attrs if attrs is None else attrs, exclude)
+ return self._process_children(parent, attrs)
+
+ @cython.final
+ def _select_attrs(self, attrs, exclude):
+ return [name for name in attrs if name not in exclude]
+
+ @cython.final
+ def _process_children(self, parent, attrs=None):
+ # fast cdef entry point for calls from Cython subclasses
result = self._visitchildren(parent, attrs)
for attr, newnode in result.items():
- if type(newnode) is not list:
- setattr(parent, attr, newnode)
- else:
- # Flatten the list one level and remove any None
- newlist = []
- for x in newnode:
- if x is not None:
- if type(x) is list:
- newlist += x
- else:
- newlist.append(x)
- setattr(parent, attr, newlist)
+ if type(newnode) is list:
+ newnode = self._flatten_list(newnode)
+ setattr(parent, attr, newnode)
return result
+ @cython.final
+ def _flatten_list(self, orig_list):
+ # Flatten the list one level and remove any None
+ newlist = []
+ for x in orig_list:
+ if x is not None:
+ if type(x) is list:
+ newlist.extend(x)
+ else:
+ newlist.append(x)
+ return newlist
+
def recurse_to_children(self, node):
- self.visitchildren(node)
+ self._process_children(node)
return node
def __call__(self, root):
return self._visit(root)
+
class CythonTransform(VisitorTransform):
"""
Certain common conventions and utilities for Cython transforms.
@@ -288,14 +304,15 @@
def visit_CompilerDirectivesNode(self, node):
old = self.current_directives
self.current_directives = node.directives
- self.visitchildren(node)
+ self._process_children(node)
self.current_directives = old
return node
def visit_Node(self, node):
- self.visitchildren(node)
+ self._process_children(node)
return node
+
class ScopeTrackingTransform(CythonTransform):
# Keeps track of type of scopes
#scope_type: can be either of 'module', 'function', 'cclass', 'pyclass', 'struct'
@@ -304,14 +321,14 @@
def visit_ModuleNode(self, node):
self.scope_type = 'module'
self.scope_node = node
- self.visitchildren(node)
+ self._process_children(node)
return node
def visit_scope(self, node, scope_type):
prev = self.scope_type, self.scope_node
self.scope_type = scope_type
self.scope_node = node
- self.visitchildren(node)
+ self._process_children(node)
self.scope_type, self.scope_node = prev
return node
@@ -354,45 +371,45 @@
def visit_FuncDefNode(self, node):
self.enter_scope(node, node.local_scope)
- self.visitchildren(node)
+ self._process_children(node)
self.exit_scope()
return node
def visit_GeneratorBodyDefNode(self, node):
- self.visitchildren(node)
+ self._process_children(node)
return node
def visit_ClassDefNode(self, node):
self.enter_scope(node, node.scope)
- self.visitchildren(node)
+ self._process_children(node)
self.exit_scope()
return node
def visit_CStructOrUnionDefNode(self, node):
self.enter_scope(node, node.scope)
- self.visitchildren(node)
+ self._process_children(node)
self.exit_scope()
return node
def visit_ScopedExprNode(self, node):
if node.expr_scope:
self.enter_scope(node, node.expr_scope)
- self.visitchildren(node)
+ self._process_children(node)
self.exit_scope()
else:
- self.visitchildren(node)
+ self._process_children(node)
return node
def visit_CArgDeclNode(self, node):
# default arguments are evaluated in the outer scope
if node.default:
attrs = [attr for attr in node.child_attrs if attr != 'default']
- self.visitchildren(node, attrs)
+ self._process_children(node, attrs)
self.enter_scope(node, self.current_env().outer_scope)
self.visitchildren(node, ('default',))
self.exit_scope()
else:
- self.visitchildren(node)
+ self._process_children(node)
return node
@@ -477,7 +494,7 @@
"""
# only visit call nodes and Python operations
def visit_GeneralCallNode(self, node):
- self.visitchildren(node)
+ self._process_children(node)
function = node.function
if not function.type.is_pyobject:
return node
@@ -492,7 +509,7 @@
return self._dispatch_to_handler(node, function, args, keyword_args)
def visit_SimpleCallNode(self, node):
- self.visitchildren(node)
+ self._process_children(node)
function = node.function
if function.type.is_pyobject:
arg_tuple = node.arg_tuple
@@ -506,7 +523,7 @@
def visit_PrimaryCmpNode(self, node):
if node.cascade:
# not currently handled below
- self.visitchildren(node)
+ self._process_children(node)
return node
return self._visit_binop_node(node)
@@ -514,7 +531,7 @@
return self._visit_binop_node(node)
def _visit_binop_node(self, node):
- self.visitchildren(node)
+ self._process_children(node)
# FIXME: could special case 'not_in'
special_method_name = find_special_method_for_binary_operator(node.operator)
if special_method_name:
@@ -535,7 +552,7 @@
return node
def visit_UnopNode(self, node):
- self.visitchildren(node)
+ self._process_children(node)
special_method_name = find_special_method_for_unary_operator(node.operator)
if special_method_name:
operand = node.operand
@@ -581,15 +598,23 @@
# into a C function call (defined in the builtin scope)
if not function.entry:
return node
+ entry = function.entry
is_builtin = (
- function.entry.is_builtin or
- function.entry is self.current_env().builtin_scope().lookup_here(function.name))
+ entry.is_builtin or
+ entry is self.current_env().builtin_scope().lookup_here(function.name))
if not is_builtin:
if function.cf_state and function.cf_state.is_single:
# we know the value of the variable
# => see if it's usable instead
return self._delegate_to_assigned_value(
node, function, arg_list, kwargs)
+ if arg_list and entry.is_cmethod and entry.scope and entry.scope.parent_type.is_builtin_type:
+ if entry.scope.parent_type is arg_list[0].type:
+ # Optimised (unbound) method of a builtin type => try to "de-optimise".
+ return self._dispatch_to_method_handler(
+ entry.name, self_arg=None, is_unbound_method=True,
+ type_name=entry.scope.parent_type.name,
+ node=node, function=function, arg_list=arg_list, kwargs=kwargs)
return node
function_handler = self._find_handler(
"function_%s" % function.name, kwargs)
@@ -615,8 +640,7 @@
obj_type = self_arg.type
is_unbound_method = False
if obj_type.is_builtin_type:
- if (obj_type is Builtin.type_type and self_arg.is_name and
- arg_list and arg_list[0].type.is_pyobject):
+ if obj_type is Builtin.type_type and self_arg.is_name and arg_list and arg_list[0].type.is_pyobject:
# calling an unbound method like 'list.append(L,x)'
# (ignoring 'type.mro()' here ...)
type_name = self_arg.name
@@ -683,7 +707,7 @@
return node
def visit_Node(self, node):
- self.visitchildren(node)
+ self._process_children(node)
if node is self.orig_node:
return self.new_node
else:
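The visitor refactoring above pulls the list-flattening step out of `visitchildren` into a dedicated `_flatten_list` helper: one level of flattening, dropping `None` entries. As a standalone sketch of that helper:

```python
def flatten_one_level(orig_list):
    """Flatten a list one level and remove any None entries, as the
    transform framework does with nodes returned by child visits."""
    newlist = []
    for x in orig_list:
        if x is not None:
            if type(x) is list:
                newlist.extend(x)
            else:
                newlist.append(x)
    return newlist
```

Note the flattening is deliberately shallow: a visit handler may replace one node with a list of nodes, but deeper nesting is preserved.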
diff -Nru cython-0.26.1/Cython/Coverage.py cython-0.29.14/Cython/Coverage.py
--- cython-0.26.1/Cython/Coverage.py 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Coverage.py 2019-11-01 14:13:39.000000000 +0000
@@ -12,6 +12,7 @@
from collections import defaultdict
from coverage.plugin import CoveragePlugin, FileTracer, FileReporter # requires coverage.py 4.0+
+from coverage.files import canonical_filename
from .Utils import find_root_package_dir, is_package_dir, open_source_file
@@ -19,30 +20,34 @@
from . import __version__
+C_FILE_EXTENSIONS = ['.c', '.cpp', '.cc', '.cxx']
+MODULE_FILE_EXTENSIONS = set(['.py', '.pyx', '.pxd'] + C_FILE_EXTENSIONS)
+
+
def _find_c_source(base_path):
- if os.path.exists(base_path + '.c'):
- c_file = base_path + '.c'
- elif os.path.exists(base_path + '.cpp'):
- c_file = base_path + '.cpp'
- else:
- c_file = None
- return c_file
+ file_exists = os.path.exists
+ for ext in C_FILE_EXTENSIONS:
+ file_name = base_path + ext
+ if file_exists(file_name):
+ return file_name
+ return None
-def _find_dep_file_path(main_file, file_path):
+def _find_dep_file_path(main_file, file_path, relative_path_search=False):
abs_path = os.path.abspath(file_path)
- if file_path.endswith('.pxi') and not os.path.exists(abs_path):
- # include files are looked up relative to the main source file
- pxi_file_path = os.path.join(os.path.dirname(main_file), file_path)
- if os.path.exists(pxi_file_path):
- abs_path = os.path.abspath(pxi_file_path)
+ if not os.path.exists(abs_path) and (file_path.endswith('.pxi') or
+ relative_path_search):
+ # files are looked up relative to the main source file
+ rel_file_path = os.path.join(os.path.dirname(main_file), file_path)
+ if os.path.exists(rel_file_path):
+ abs_path = os.path.abspath(rel_file_path)
# search sys.path for external locations if a valid file hasn't been found
if not os.path.exists(abs_path):
for sys_path in sys.path:
test_path = os.path.realpath(os.path.join(sys_path, file_path))
if os.path.exists(test_path):
- return test_path
- return abs_path
+ return canonical_filename(test_path)
+ return canonical_filename(abs_path)
class Plugin(CoveragePlugin):
@@ -63,14 +68,14 @@
if filename.startswith('<') or filename.startswith('memory:'):
return None
c_file = py_file = None
- filename = os.path.abspath(filename)
+ filename = canonical_filename(os.path.abspath(filename))
if self._c_files_map and filename in self._c_files_map:
c_file = self._c_files_map[filename][0]
if c_file is None:
c_file, py_file = self._find_source_files(filename)
if not c_file:
- return None
+ return None # unknown file
# parse all source file paths and lines from C file
# to learn about all relevant source files right away (pyx/pxi/pxd)
@@ -78,7 +83,9 @@
# is not from the main .pyx file but a file with a different
# name than the .c file (which prevents us from finding the
# .c file)
- self._parse_lines(c_file, filename)
+ _, code = self._read_source_lines(c_file, filename)
+ if code is None:
+ return None # no source found
if self._file_path_map is None:
self._file_path_map = {}
@@ -91,23 +98,31 @@
# from coverage.python import PythonFileReporter
# return PythonFileReporter(filename)
- filename = os.path.abspath(filename)
+ filename = canonical_filename(os.path.abspath(filename))
if self._c_files_map and filename in self._c_files_map:
c_file, rel_file_path, code = self._c_files_map[filename]
else:
c_file, _ = self._find_source_files(filename)
if not c_file:
return None # unknown file
- rel_file_path, code = self._parse_lines(c_file, filename)
+ rel_file_path, code = self._read_source_lines(c_file, filename)
+ if code is None:
+ return None # no source found
return CythonModuleReporter(c_file, filename, rel_file_path, code)
def _find_source_files(self, filename):
basename, ext = os.path.splitext(filename)
ext = ext.lower()
- if ext in ('.py', '.pyx', '.pxd', '.c', '.cpp'):
+ if ext in MODULE_FILE_EXTENSIONS:
pass
- elif ext in ('.so', '.pyd'):
- platform_suffix = re.search(r'[.]cpython-[0-9]+[a-z]*$', basename, re.I)
+ elif ext == '.pyd':
+ # Windows extension module
+ platform_suffix = re.search(r'[.]cp[0-9]+-win[_a-z0-9]*$', basename, re.I)
+ if platform_suffix:
+ basename = basename[:platform_suffix.start()]
+ elif ext == '.so':
+ # Linux/Unix/Mac extension module
+ platform_suffix = re.search(r'[.](?:cpython|pypy)-[0-9]+[-_a-z0-9]*$', basename, re.I)
if platform_suffix:
basename = basename[:platform_suffix.start()]
elif ext == '.pxi':
@@ -121,7 +136,7 @@
# none of our business
return None, None
- c_file = filename if ext in ('.c', '.cpp') else _find_c_source(basename)
+ c_file = filename if ext in C_FILE_EXTENSIONS else _find_c_source(basename)
if c_file is None:
# a module "pkg/mod.so" can have a source file "pkg/pkg.mod.c"
package_root = find_root_package_dir.uncached(filename)
@@ -155,15 +170,15 @@
splitext = os.path.splitext
for filename in os.listdir(dir_path):
ext = splitext(filename)[1].lower()
- if ext in ('.c', '.cpp'):
- self._parse_lines(os.path.join(dir_path, filename), source_file)
+ if ext in C_FILE_EXTENSIONS:
+ self._read_source_lines(os.path.join(dir_path, filename), source_file)
if source_file in self._c_files_map:
return
# not found? then try one package up
if is_package_dir(dir_path):
self._find_c_source_files(os.path.dirname(dir_path), source_file)
- def _parse_lines(self, c_file, sourcefile):
+ def _read_source_lines(self, c_file, sourcefile):
"""
Parse a Cython generated C/C++ source file and find the executable lines.
Each executable line starts with a comment header that states source file
@@ -174,52 +189,72 @@
if c_file in self._parsed_c_files:
code_lines = self._parsed_c_files[c_file]
else:
- match_source_path_line = re.compile(r' */[*] +"(.*)":([0-9]+)$').match
- match_current_code_line = re.compile(r' *[*] (.*) # <<<<<<+$').match
- match_comment_end = re.compile(r' *[*]/$').match
- not_executable = re.compile(
- r'\s*c(?:type)?def\s+'
- r'(?:(?:public|external)\s+)?'
- r'(?:struct|union|enum|class)'
- r'(\s+[^:]+|)\s*:'
- ).match
-
- code_lines = defaultdict(dict)
- filenames = set()
- with open(c_file) as lines:
- lines = iter(lines)
- for line in lines:
- match = match_source_path_line(line)
- if not match:
- continue
- filename, lineno = match.groups()
- filenames.add(filename)
- lineno = int(lineno)
- for comment_line in lines:
- match = match_current_code_line(comment_line)
- if match:
- code_line = match.group(1).rstrip()
- if not_executable(code_line):
- break
- code_lines[filename][lineno] = code_line
- break
- elif match_comment_end(comment_line):
- # unexpected comment format - false positive?
- break
-
+ code_lines = self._parse_cfile_lines(c_file)
self._parsed_c_files[c_file] = code_lines
if self._c_files_map is None:
self._c_files_map = {}
for filename, code in code_lines.items():
- abs_path = _find_dep_file_path(c_file, filename)
+ abs_path = _find_dep_file_path(c_file, filename,
+ relative_path_search=True)
self._c_files_map[abs_path] = (c_file, filename, code)
if sourcefile not in self._c_files_map:
return (None,) * 2 # e.g. shared library file
return self._c_files_map[sourcefile][1:]
+ def _parse_cfile_lines(self, c_file):
+ """
+ Parse a C file and extract all source file lines that generated executable code.
+ """
+ match_source_path_line = re.compile(r' */[*] +"(.*)":([0-9]+)$').match
+ match_current_code_line = re.compile(r' *[*] (.*) # <<<<<<+$').match
+ match_comment_end = re.compile(r' *[*]/$').match
+ match_trace_line = re.compile(r' *__Pyx_TraceLine\(([0-9]+),').match
+ not_executable = re.compile(
+ r'\s*c(?:type)?def\s+'
+ r'(?:(?:public|external)\s+)?'
+ r'(?:struct|union|enum|class)'
+ r'(\s+[^:]+|)\s*:'
+ ).match
+
+ code_lines = defaultdict(dict)
+ executable_lines = defaultdict(set)
+ current_filename = None
+
+ with open(c_file) as lines:
+ lines = iter(lines)
+ for line in lines:
+ match = match_source_path_line(line)
+ if not match:
+ if '__Pyx_TraceLine(' in line and current_filename is not None:
+ trace_line = match_trace_line(line)
+ if trace_line:
+ executable_lines[current_filename].add(int(trace_line.group(1)))
+ continue
+ filename, lineno = match.groups()
+ current_filename = filename
+ lineno = int(lineno)
+ for comment_line in lines:
+ match = match_current_code_line(comment_line)
+ if match:
+ code_line = match.group(1).rstrip()
+ if not_executable(code_line):
+ break
+ code_lines[filename][lineno] = code_line
+ break
+ elif match_comment_end(comment_line):
+ # unexpected comment format - false positive?
+ break
+
+ # Remove lines that generated code but are not traceable.
+ for filename, lines in code_lines.items():
+ dead_lines = set(lines).difference(executable_lines.get(filename, ()))
+ for lineno in dead_lines:
+ del lines[lineno]
+ return code_lines
+
class CythonModuleTracer(FileTracer):
"""
diff -Nru cython-0.26.1/Cython/Debugger/DebugWriter.py cython-0.29.14/Cython/Debugger/DebugWriter.py
--- cython-0.26.1/Cython/Debugger/DebugWriter.py 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Debugger/DebugWriter.py 2018-11-24 09:20:06.000000000 +0000
@@ -44,6 +44,10 @@
def end(self, name):
self.tb.end(name)
+ def add_entry(self, name, **attrs):
+ self.tb.start(name, attrs)
+ self.tb.end(name)
+
def serialize(self):
self.tb.end('Module')
self.tb.end('cython_debug')
diff -Nru cython-0.26.1/Cython/Debugger/libcython.py cython-0.29.14/Cython/Debugger/libcython.py
--- cython-0.26.1/Cython/Debugger/libcython.py 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Debugger/libcython.py 2018-09-22 14:18:56.000000000 +0000
@@ -488,7 +488,7 @@
class CyGDBError(gdb.GdbError):
"""
- Base class for Cython-command related erorrs
+ Base class for Cython-command related errors
"""
def __init__(self, *args):
@@ -900,7 +900,7 @@
def lineno(self, frame):
# Take care of the Python and Cython levels. We need to care for both
- # as we can't simply dispath to 'py-step', since that would work for
+ # as we can't simply dispatch to 'py-step', since that would work for
# stepping through Python code, but it would not step back into Cython-
# related code. The C level should be dispatched to the 'step' command.
if self.is_cython_function(frame):
diff -Nru cython-0.26.1/Cython/Debugger/libpython.py cython-0.29.14/Cython/Debugger/libpython.py
--- cython-0.26.1/Cython/Debugger/libpython.py 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Debugger/libpython.py 2018-11-24 09:20:06.000000000 +0000
@@ -25,9 +25,10 @@
In particular, given a gdb.Value corresponding to a PyObject* in the inferior
process, we can generate a "proxy value" within the gdb process. For example,
given a PyObject* in the inferior process that is in fact a PyListObject*
-holding three PyObject* that turn out to be PyStringObject* instances, we can
-generate a proxy value within the gdb process that is a list of strings:
- ["foo", "bar", "baz"]
+holding three PyObject* that turn out to be PyBytesObject* instances, we can
+generate a proxy value within the gdb process that is a list of bytes
+instances:
+ [b"foo", b"bar", b"baz"]
Doing so can be expensive for complicated graphs of objects, and could take
some time, so we also have a "write_repr" method that writes a representation
@@ -46,70 +47,72 @@
The module also extends gdb with some python-specific commands.
'''
-try:
- input = raw_input
-except NameError:
- pass
+# NOTE: some gdbs are linked with Python 3, so this file should be dual-syntax
+# compatible (2.6+ and 3.0+). See #19308.
+from __future__ import print_function
+import gdb
import os
-import re
-import sys
-import struct
import locale
-import atexit
-import warnings
-import tempfile
-import textwrap
-import itertools
-
-import gdb
+import sys
-try:
- xrange
-except NameError:
+if sys.version_info[0] >= 3:
+ unichr = chr
xrange = range
-
-if sys.version_info[0] < 3:
- # I think this is the only way to fix this bug :'(
- # http://sourceware.org/bugzilla/show_bug.cgi?id=12285
- out, err = sys.stdout, sys.stderr
- reload(sys).setdefaultencoding('UTF-8')
- sys.stdout = out
- sys.stderr = err
+ long = int
# Look up the gdb.Type for some standard types:
-_type_char_ptr = gdb.lookup_type('char').pointer() # char*
-_type_unsigned_char_ptr = gdb.lookup_type('unsigned char').pointer()
-_type_void_ptr = gdb.lookup_type('void').pointer() # void*
+# Those need to be refreshed as types (pointer sizes) may change when
+# gdb loads different executables
-SIZEOF_VOID_P = _type_void_ptr.sizeof
+def _type_char_ptr():
+ return gdb.lookup_type('char').pointer() # char*
-Py_TPFLAGS_HEAPTYPE = (1 << 9)
-Py_TPFLAGS_INT_SUBCLASS = (1 << 23)
+def _type_unsigned_char_ptr():
+ return gdb.lookup_type('unsigned char').pointer() # unsigned char*
+
+
+def _type_unsigned_short_ptr():
+ return gdb.lookup_type('unsigned short').pointer()
+
+
+def _type_unsigned_int_ptr():
+ return gdb.lookup_type('unsigned int').pointer()
+
+
+def _sizeof_void_p():
+ return gdb.lookup_type('void').pointer().sizeof
+
+
+# value computed later, see PyUnicodeObjectPtr.proxy()
+_is_pep393 = None
+
+Py_TPFLAGS_HEAPTYPE = (1 << 9)
Py_TPFLAGS_LONG_SUBCLASS = (1 << 24)
Py_TPFLAGS_LIST_SUBCLASS = (1 << 25)
Py_TPFLAGS_TUPLE_SUBCLASS = (1 << 26)
-Py_TPFLAGS_STRING_SUBCLASS = (1 << 27)
Py_TPFLAGS_BYTES_SUBCLASS = (1 << 27)
Py_TPFLAGS_UNICODE_SUBCLASS = (1 << 28)
Py_TPFLAGS_DICT_SUBCLASS = (1 << 29)
Py_TPFLAGS_BASE_EXC_SUBCLASS = (1 << 30)
Py_TPFLAGS_TYPE_SUBCLASS = (1 << 31)
-MAX_OUTPUT_LEN = 1024
+
+MAX_OUTPUT_LEN=1024
hexdigits = "0123456789abcdef"
ENCODING = locale.getpreferredencoding()
+EVALFRAME = '_PyEval_EvalFrameDefault'
class NullPyObjectPtr(RuntimeError):
pass
def safety_limit(val):
- # Given a integer value from the process being debugged, limit it to some
+ # Given an integer value from the process being debugged, limit it to some
# safety threshold so that arbitrary breakage within said process doesn't
# break the gdb process too much (e.g. sizes of iterations, sizes of lists)
return min(val, 1000)
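The `Py_TPFLAGS_*` constants redefined in this hunk let the debugger classify a `PyObject*` from its type-flag bits rather than walking the MRO inside the inferior process. The same bits are visible on live CPython type objects through `__flags__`; a quick sanity check (constant names copied from the hunk):

```python
Py_TPFLAGS_LONG_SUBCLASS    = 1 << 24
Py_TPFLAGS_TUPLE_SUBCLASS   = 1 << 26
Py_TPFLAGS_UNICODE_SUBCLASS = 1 << 28

flags = [
    bool(int.__flags__ & Py_TPFLAGS_LONG_SUBCLASS),     # True
    bool(bool.__flags__ & Py_TPFLAGS_LONG_SUBCLASS),    # True: bool subclasses int
    bool(str.__flags__ & Py_TPFLAGS_UNICODE_SUBCLASS),  # True
    bool(dict.__flags__ & Py_TPFLAGS_TUPLE_SUBCLASS),   # False
]
print(flags)
```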
@@ -118,42 +121,45 @@
def safe_range(val):
# As per range, but don't trust the value too much: cap it to a safety
# threshold in case the data was corrupted
- return range(safety_limit(val))
-
+ return xrange(safety_limit(int(val)))
-def write_unicode(file, text):
- # Write a byte or unicode string to file. Unicode strings are encoded to
- # ENCODING encoding with 'backslashreplace' error handler to avoid
- # UnicodeEncodeError.
- if not isinstance(text, str):
- text = text.encode(ENCODING, 'backslashreplace')
- file.write(text)
-
-
-def os_fsencode(filename):
- if isinstance(filename, str): # only encode in Py2
- return filename
- encoding = sys.getfilesystemencoding()
- if encoding == 'mbcs':
- # mbcs doesn't support surrogateescape
- return filename.encode(encoding)
- encoded = []
- for char in filename:
- # surrogateescape error handler
- if 0xDC80 <= ord(char) <= 0xDCFF:
- byte = chr(ord(char) - 0xDC00)
- else:
- byte = char.encode(encoding)
- encoded.append(byte)
- return ''.join(encoded)
+if sys.version_info[0] >= 3:
+ def write_unicode(file, text):
+ file.write(text)
+else:
+ def write_unicode(file, text):
+ # Write a byte or unicode string to file. Unicode strings are encoded to
+ # ENCODING encoding with 'backslashreplace' error handler to avoid
+ # UnicodeEncodeError.
+ if isinstance(text, unicode):
+ text = text.encode(ENCODING, 'backslashreplace')
+ file.write(text)
+try:
+ os_fsencode = os.fsencode
+except AttributeError:
+ def os_fsencode(filename):
+ if not isinstance(filename, unicode):
+ return filename
+ encoding = sys.getfilesystemencoding()
+ if encoding == 'mbcs':
+ # mbcs doesn't support surrogateescape
+ return filename.encode(encoding)
+ encoded = []
+ for char in filename:
+ # surrogateescape error handler
+ if 0xDC80 <= ord(char) <= 0xDCFF:
+ byte = chr(ord(char) - 0xDC00)
+ else:
+ byte = char.encode(encoding)
+ encoded.append(byte)
+ return ''.join(encoded)
class StringTruncated(RuntimeError):
pass
-
class TruncatedStringIO(object):
- '''Similar to cStringIO, but can truncate the output by raising a
+ '''Similar to io.StringIO, but can truncate the output by raising a
StringTruncated exception'''
def __init__(self, maxlen=None):
self._val = ''
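The `os_fsencode` fallback above re-implements `os.fsencode` for Python 2 gdb builds: filename bytes that failed to decode come back as lone surrogates in U+DC80..U+DCFF, and re-encoding restores the original bytes. A Python 3 illustration of that round-trip:

```python
# b'\xff' is invalid UTF-8; 'surrogateescape' smuggles it through
# as the lone surrogate U+DCFF instead of raising.
name = b'caf\xff'.decode('utf-8', 'surrogateescape')
print(ascii(name))  # 'caf\udcff'
print(name.encode('utf-8', 'surrogateescape') == b'caf\xff')  # True
```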
@@ -171,41 +177,10 @@
def getvalue(self):
return self._val
-
-# pretty printer lookup
-all_pretty_typenames = set()
-
-
-class PrettyPrinterTrackerMeta(type):
-
- def __init__(self, name, bases, dict):
- super(PrettyPrinterTrackerMeta, self).__init__(name, bases, dict)
- all_pretty_typenames.add(self._typename)
-
-
-# Class decorator that adds a metaclass and recreates the class with it.
-# Copied from 'six'. See Cython/Utils.py.
-def _add_metaclass(metaclass):
- """Class decorator for creating a class with a metaclass."""
- def wrapper(cls):
- orig_vars = cls.__dict__.copy()
- slots = orig_vars.get('__slots__')
- if slots is not None:
- if isinstance(slots, str):
- slots = [slots]
- for slots_var in slots:
- orig_vars.pop(slots_var)
- orig_vars.pop('__dict__', None)
- orig_vars.pop('__weakref__', None)
- return metaclass(cls.__name__, cls.__bases__, orig_vars)
- return wrapper
-
-
-@_add_metaclass(PrettyPrinterTrackerMeta)
class PyObjectPtr(object):
"""
- Class wrapping a gdb.Value that's a either a (PyObject*) within the
- inferior process, or some subclass pointer e.g. (PyStringObject*)
+ Class wrapping a gdb.Value that's either a (PyObject*) within the
+ inferior process, or some subclass pointer e.g. (PyBytesObject*)
There will be a subclass for every refined PyObject type that we care
about.
@@ -213,7 +188,6 @@
Note that at every stage the underlying pointer could be NULL, point
to corrupt data, etc; this is the debugger, after all.
"""
-
_typename = 'PyObject'
def __init__(self, gdbval, cast_to=None):
@@ -286,7 +260,7 @@
return PyTypeObjectPtr(self.field('ob_type'))
def is_null(self):
- return not self._gdbval
+ return 0 == long(self._gdbval)
def is_optimized_out(self):
'''
@@ -347,7 +321,7 @@
return '<%s at remote 0x%x>' % (self.tp_name, self.address)
return FakeRepr(self.safe_tp_name(),
- int(self._gdbval))
+ long(self._gdbval))
def write_repr(self, out, visited):
'''
@@ -386,44 +360,40 @@
# class
return cls
- #print 'tp_flags = 0x%08x' % tp_flags
- #print 'tp_name = %r' % tp_name
+ #print('tp_flags = 0x%08x' % tp_flags)
+ #print('tp_name = %r' % tp_name)
name_map = {'bool': PyBoolObjectPtr,
'classobj': PyClassObjectPtr,
- 'instance': PyInstanceObjectPtr,
'NoneType': PyNoneStructPtr,
'frame': PyFrameObjectPtr,
'set' : PySetObjectPtr,
'frozenset' : PySetObjectPtr,
'builtin_function_or_method' : PyCFunctionObjectPtr,
+ 'method-wrapper': wrapperobject,
}
if tp_name in name_map:
return name_map[tp_name]
- if tp_flags & (Py_TPFLAGS_HEAPTYPE|Py_TPFLAGS_TYPE_SUBCLASS):
- return PyTypeObjectPtr
+ if tp_flags & Py_TPFLAGS_HEAPTYPE:
+ return HeapTypeObjectPtr
- if tp_flags & Py_TPFLAGS_INT_SUBCLASS:
- return PyIntObjectPtr
if tp_flags & Py_TPFLAGS_LONG_SUBCLASS:
return PyLongObjectPtr
if tp_flags & Py_TPFLAGS_LIST_SUBCLASS:
return PyListObjectPtr
if tp_flags & Py_TPFLAGS_TUPLE_SUBCLASS:
return PyTupleObjectPtr
- if tp_flags & Py_TPFLAGS_STRING_SUBCLASS:
- try:
- gdb.lookup_type('PyBytesObject')
- return PyBytesObjectPtr
- except RuntimeError:
- return PyStringObjectPtr
+ if tp_flags & Py_TPFLAGS_BYTES_SUBCLASS:
+ return PyBytesObjectPtr
if tp_flags & Py_TPFLAGS_UNICODE_SUBCLASS:
return PyUnicodeObjectPtr
if tp_flags & Py_TPFLAGS_DICT_SUBCLASS:
return PyDictObjectPtr
if tp_flags & Py_TPFLAGS_BASE_EXC_SUBCLASS:
return PyBaseExceptionObjectPtr
+ #if tp_flags & Py_TPFLAGS_TYPE_SUBCLASS:
+ # return PyTypeObjectPtr
# Use the base class:
return cls
@@ -438,7 +408,7 @@
p = PyObjectPtr(gdbval)
cls = cls.subclass_from_type(p.type())
return cls(gdbval, cast_to=cls.get_gdb_type())
- except RuntimeError as exc:
+ except RuntimeError:
# Handle any kind of error e.g. NULL ptrs by simply using the base
# class
pass
@@ -449,13 +419,11 @@
return gdb.lookup_type(cls._typename).pointer()
def as_address(self):
- return int(self._gdbval)
-
+ return long(self._gdbval)
class PyVarObjectPtr(PyObjectPtr):
_typename = 'PyVarObject'
-
class ProxyAlreadyVisited(object):
'''
Placeholder proxy to use when protecting against infinite recursion due to
@@ -471,7 +439,7 @@
def _write_instance_repr(out, visited, name, pyop_attrdict, address):
- '''Shared code for use by old-style and new-style classes:
+ '''Shared code for use by all classes:
write a representation to file-like object "out"'''
out.write('<')
out.write(name)
@@ -480,7 +448,7 @@
if isinstance(pyop_attrdict, PyDictObjectPtr):
out.write('(')
first = True
- for pyop_arg, pyop_val in pyop_attrdict.items():
+ for pyop_arg, pyop_val in pyop_attrdict.iteritems():
if not first:
out.write(', ')
first = False
@@ -500,24 +468,27 @@
def __repr__(self):
if isinstance(self.attrdict, dict):
- kwargs = ', '.join("%s=%r" % (arg, val) for arg, val in self.attrdict.items())
- return '<%s(%s) at remote 0x%x>' % (
- self.cl_name, kwargs, self.address)
+ kwargs = ', '.join(["%s=%r" % (arg, val)
+ for arg, val in self.attrdict.iteritems()])
+ return '<%s(%s) at remote 0x%x>' % (self.cl_name,
+ kwargs, self.address)
else:
- return '<%s at remote 0x%x>' % (
- self.cl_name, self.address)
-
+ return '<%s at remote 0x%x>' % (self.cl_name,
+ self.address)
def _PyObject_VAR_SIZE(typeobj, nitems):
+ if _PyObject_VAR_SIZE._type_size_t is None:
+ _PyObject_VAR_SIZE._type_size_t = gdb.lookup_type('size_t')
+
return ( ( typeobj.field('tp_basicsize') +
nitems * typeobj.field('tp_itemsize') +
- (SIZEOF_VOID_P - 1)
- ) & ~(SIZEOF_VOID_P - 1)
- ).cast(gdb.lookup_type('size_t'))
+ (_sizeof_void_p() - 1)
+ ) & ~(_sizeof_void_p() - 1)
+ ).cast(_PyObject_VAR_SIZE._type_size_t)
+_PyObject_VAR_SIZE._type_size_t = None
-
-class PyTypeObjectPtr(PyObjectPtr):
- _typename = 'PyTypeObject'
+class HeapTypeObjectPtr(PyObjectPtr):
+ _typename = 'PyObject'
def get_attr_dict(self):
'''
@@ -536,9 +507,9 @@
size = _PyObject_VAR_SIZE(typeobj, tsize)
dictoffset += size
assert dictoffset > 0
- assert dictoffset % SIZEOF_VOID_P == 0
+ assert dictoffset % _sizeof_void_p() == 0
- dictptr = self._gdbval.cast(_type_char_ptr) + dictoffset
+ dictptr = self._gdbval.cast(_type_char_ptr()) + dictoffset
PyObjectPtrPtr = PyObjectPtr.get_gdb_type().pointer()
dictptr = dictptr.cast(PyObjectPtrPtr)
return PyObjectPtr.from_pyobject_ptr(dictptr.dereference())
@@ -551,7 +522,7 @@
def proxyval(self, visited):
'''
- Support for new-style classes.
+ Support for classes.
Currently we just locate the dictionary using a transliteration to
python of _PyObject_GetDictPtr, ignoring descriptors
@@ -568,8 +539,8 @@
attr_dict = {}
tp_name = self.safe_tp_name()
- # New-style class:
- return InstanceProxy(tp_name, attr_dict, int(self._gdbval))
+ # Class:
+ return InstanceProxy(tp_name, attr_dict, long(self._gdbval))
def write_repr(self, out, visited):
# Guard against infinite loops:
@@ -578,16 +549,9 @@
return
visited.add(self.as_address())
- try:
- tp_name = self.field('tp_name').string()
- except RuntimeError:
- tp_name = 'unknown'
-
-        out.write('<type %s at remote 0x%x>' % (tp_name, self.as_address()))
- # pyop_attrdict = self.get_attr_dict()
- # _write_instance_repr(out, visited,
- # self.safe_tp_name(), pyop_attrdict, self.as_address())
-
+ pyop_attrdict = self.get_attr_dict()
+ _write_instance_repr(out, visited,
+ self.safe_tp_name(), pyop_attrdict, self.as_address())
class ProxyException(Exception):
def __init__(self, tp_name, args):
@@ -597,7 +561,6 @@
def __repr__(self):
return '%s%r' % (self.tp_name, self.args)
-
class PyBaseExceptionObjectPtr(PyObjectPtr):
"""
Class wrapping a gdb.Value that's a PyBaseExceptionObject* i.e. an exception
@@ -624,7 +587,6 @@
out.write(self.safe_tp_name())
self.write_field_repr('args', out, visited)
-
class PyClassObjectPtr(PyObjectPtr):
"""
Class wrapping a gdb.Value that's a PyClassObject* i.e. a
@@ -640,17 +602,17 @@
def __repr__(self):
return "" % self.ml_name
-
class BuiltInMethodProxy(object):
def __init__(self, ml_name, pyop_m_self):
self.ml_name = ml_name
self.pyop_m_self = pyop_m_self
def __repr__(self):
- return '<built-in method %s of %s object at remote 0x%x>' % (
- self.ml_name, self.pyop_m_self.safe_tp_name(),
- self.pyop_m_self.as_address())
-
+ return ('<built-in method %s of %s object at remote 0x%x>'
+ % (self.ml_name,
+ self.pyop_m_self.safe_tp_name(),
+ self.pyop_m_self.as_address())
+ )
class PyCFunctionObjectPtr(PyObjectPtr):
"""
@@ -709,17 +671,21 @@
def iteritems(self):
'''
Yields a sequence of (PyObjectPtr key, PyObjectPtr value) pairs,
- analagous to dict.items()
+ analogous to dict.iteritems()
'''
- for i in safe_range(self.field('ma_mask') + 1):
- ep = self.field('ma_table') + i
- pyop_value = PyObjectPtr.from_pyobject_ptr(ep['me_value'])
+ keys = self.field('ma_keys')
+ values = self.field('ma_values')
+ entries, nentries = self._get_entries(keys)
+ for i in safe_range(nentries):
+ ep = entries[i]
+ if long(values):
+ pyop_value = PyObjectPtr.from_pyobject_ptr(values[i])
+ else:
+ pyop_value = PyObjectPtr.from_pyobject_ptr(ep['me_value'])
if not pyop_value.is_null():
pyop_key = PyObjectPtr.from_pyobject_ptr(ep['me_key'])
yield (pyop_key, pyop_value)
- items = iteritems
-
def proxyval(self, visited):
# Guard against infinite loops:
if self.as_address() in visited:
@@ -727,7 +693,7 @@
visited.add(self.as_address())
result = {}
- for pyop_key, pyop_value in self.items():
+ for pyop_key, pyop_value in self.iteritems():
proxy_key = pyop_key.proxyval(visited)
proxy_value = pyop_value.proxyval(visited)
result[proxy_key] = proxy_value
@@ -742,7 +708,7 @@
out.write('{')
first = True
- for pyop_key, pyop_value in self.items():
+ for pyop_key, pyop_value in self.iteritems():
if not first:
out.write(', ')
first = False
@@ -751,52 +717,31 @@
pyop_value.write_repr(out, visited)
out.write('}')
+ def _get_entries(self, keys):
+ dk_nentries = int(keys['dk_nentries'])
+ dk_size = int(keys['dk_size'])
+ try:
+ # <= Python 3.5
+ return keys['dk_entries'], dk_size
+ except RuntimeError:
+ # >= Python 3.6
+ pass
-class PyInstanceObjectPtr(PyObjectPtr):
- _typename = 'PyInstanceObject'
-
- def proxyval(self, visited):
- # Guard against infinite loops:
- if self.as_address() in visited:
- return ProxyAlreadyVisited('<...>')
- visited.add(self.as_address())
-
- # Get name of class:
- in_class = self.pyop_field('in_class')
- cl_name = in_class.pyop_field('cl_name').proxyval(visited)
-
- # Get dictionary of instance attributes:
- in_dict = self.pyop_field('in_dict').proxyval(visited)
-
- # Old-style class:
- return InstanceProxy(cl_name, in_dict, int(self._gdbval))
-
- def write_repr(self, out, visited):
- # Guard against infinite loops:
- if self.as_address() in visited:
- out.write('<...>')
- return
- visited.add(self.as_address())
-
- # Old-style class:
-
- # Get name of class:
- in_class = self.pyop_field('in_class')
- cl_name = in_class.pyop_field('cl_name').proxyval(visited)
-
- # Get dictionary of instance attributes:
- pyop_in_dict = self.pyop_field('in_dict')
-
- _write_instance_repr(out, visited,
- cl_name, pyop_in_dict, self.as_address())
-
-
-class PyIntObjectPtr(PyObjectPtr):
- _typename = 'PyIntObject'
+ if dk_size <= 0xFF:
+ offset = dk_size
+ elif dk_size <= 0xFFFF:
+ offset = 2 * dk_size
+ elif dk_size <= 0xFFFFFFFF:
+ offset = 4 * dk_size
+ else:
+ offset = 8 * dk_size
+
+ ent_addr = keys['dk_indices']['as_1'].address
+ ent_addr = ent_addr.cast(_type_unsigned_char_ptr()) + offset
+ ent_ptr_t = gdb.lookup_type('PyDictKeyEntry').pointer()
+ ent_addr = ent_addr.cast(ent_ptr_t)
- def proxyval(self, visited):
- result = int_from_int(self.field('ob_ival'))
- return result
+ return ent_addr, dk_nentries
class PyListObjectPtr(PyObjectPtr):
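The new `_get_entries()` above handles the Python 3.6+ "compact" dict layout, where `dk_entries` sits immediately after an index table whose element width grows with the number of slots. The offset arithmetic from the hunk, restated as a standalone helper:

```python
def entries_offset(dk_size):
    """Byte offset of dk_entries past dk_indices for a compact dict
    (CPython >= 3.6): indices are int8/int16/int32/int64 depending
    on how many slots the hash table has."""
    if dk_size <= 0xFF:
        return dk_size
    elif dk_size <= 0xFFFF:
        return 2 * dk_size
    elif dk_size <= 0xFFFFFFFF:
        return 4 * dk_size
    else:
        return 8 * dk_size

print(entries_offset(8))     # 8
print(entries_offset(1024))  # 2048
```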
@@ -832,7 +777,6 @@
element.write_repr(out, visited)
out.write(']')
-
class PyLongObjectPtr(PyObjectPtr):
_typename = 'PyLongObject'
@@ -854,9 +798,9 @@
#define PyLong_SHIFT 30
#define PyLong_SHIFT 15
'''
- ob_size = int(self.field('ob_size'))
+ ob_size = long(self.field('ob_size'))
if ob_size == 0:
- return int(0)
+ return 0
ob_digit = self.field('ob_digit')
@@ -865,7 +809,7 @@
else:
SHIFT = 30
- digits = [ob_digit[i] * (1 << (SHIFT*i))
+ digits = [long(ob_digit[i]) * 2**(SHIFT*i)
for i in safe_range(abs(ob_size))]
result = sum(digits)
if ob_size < 0:
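`PyLongObjectPtr.proxyval()` above rebuilds an arbitrary-precision int from its digit array: each `ob_digit[i]` carries `PyLong_SHIFT` bits (15 or 30), and the sign of `ob_size` is the sign of the number. The same arithmetic in plain Python (the digit values below are invented for the example):

```python
def long_from_digits(ob_size, ob_digit, shift=30):
    # Digit i is weighted by 2**(shift*i); a negative ob_size
    # marks a negative number, and abs(ob_size) is the digit count.
    if ob_size == 0:
        return 0
    result = sum(d * 2**(shift * i) for i, d in enumerate(ob_digit))
    return -result if ob_size < 0 else result

print(long_from_digits(2, [0, 2]))  # 2147483648, i.e. 2**31
print(long_from_digits(-1, [7]))    # -7
```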
@@ -883,13 +827,11 @@
Class wrapping a gdb.Value that's a PyBoolObject* i.e. one of the two
instances (Py_True/Py_False) within the process being debugged.
"""
- _typename = 'PyBoolObject'
-
def proxyval(self, visited):
- castto = gdb.lookup_type('PyLongObject').pointer()
- self._gdbval = self._gdbval.cast(castto)
- return bool(PyLongObjectPtr(self._gdbval).proxyval(visited))
-
+ if PyLongObjectPtr.proxyval(self, visited):
+ return True
+ else:
+ return False
class PyNoneStructPtr(PyObjectPtr):
"""
@@ -939,10 +881,10 @@
the global variables of this frame
'''
if self.is_optimized_out():
- return
+ return ()
pyop_globals = self.pyop_field('f_globals')
- return iter(pyop_globals.items())
+ return pyop_globals.iteritems()
def iter_builtins(self):
'''
@@ -950,10 +892,10 @@
the builtin variables
'''
if self.is_optimized_out():
- return
+ return ()
pyop_builtins = self.pyop_field('f_builtins')
- return iter(pyop_builtins.items())
+ return pyop_builtins.iteritems()
def get_var_by_name(self, name):
'''
@@ -989,7 +931,7 @@
if self.is_optimized_out():
return None
f_trace = self.field('f_trace')
- if f_trace:
+ if long(f_trace) != 0:
# we have a non-NULL f_trace:
return self.f_lineno
else:
@@ -1004,7 +946,11 @@
if self.is_optimized_out():
return '(frame information optimized out)'
filename = self.filename()
- with open(os_fsencode(filename), 'r') as f:
+ try:
+ f = open(os_fsencode(filename), 'r')
+ except IOError:
+ return None
+ with f:
all_lines = f.readlines()
# Convert from 1-based current_line_num to 0-based list offset:
return all_lines[self.current_line_num()-1]
@@ -1030,25 +976,39 @@
out.write(')')
+ def print_traceback(self):
+ if self.is_optimized_out():
+ sys.stdout.write(' (frame information optimized out)\n')
+ return
+ visited = set()
+ sys.stdout.write(' File "%s", line %i, in %s\n'
+ % (self.co_filename.proxyval(visited),
+ self.current_line_num(),
+ self.co_name.proxyval(visited)))
class PySetObjectPtr(PyObjectPtr):
_typename = 'PySetObject'
+ @classmethod
+ def _dummy_key(self):
+ return gdb.lookup_global_symbol('_PySet_Dummy').value()
+
+ def __iter__(self):
+ dummy_ptr = self._dummy_key()
+ table = self.field('table')
+ for i in safe_range(self.field('mask') + 1):
+ setentry = table[i]
+ key = setentry['key']
+ if key != 0 and key != dummy_ptr:
+ yield PyObjectPtr.from_pyobject_ptr(key)
+
def proxyval(self, visited):
# Guard against infinite loops:
if self.as_address() in visited:
return ProxyAlreadyVisited('%s(...)' % self.safe_tp_name())
visited.add(self.as_address())
- members = []
- table = self.field('table')
- for i in safe_range(self.field('mask')+1):
- setentry = table[i]
- key = setentry['key']
- if key != 0:
- key_proxy = PyObjectPtr.from_pyobject_ptr(key).proxyval(visited)
- if key_proxy != '':
- members.append(key_proxy)
+ members = (key.proxyval(visited) for key in self)
if self.safe_tp_name() == 'frozenset':
return frozenset(members)
else:
@@ -1077,18 +1037,11 @@
out.write('{')
first = True
- table = self.field('table')
- for i in safe_range(self.field('mask')+1):
- setentry = table[i]
- key = setentry['key']
- if key != 0:
- pyop_key = PyObjectPtr.from_pyobject_ptr(key)
- key_proxy = pyop_key.proxyval(visited) # FIXME!
- if key_proxy != '':
- if not first:
- out.write(', ')
- first = False
- pyop_key.write_repr(out, visited)
+ for key in self:
+ if not first:
+ out.write(', ')
+ first = False
+ key.write_repr(out, visited)
out.write('}')
if tp_name != 'set':
@@ -1101,13 +1054,13 @@
def __str__(self):
field_ob_size = self.field('ob_size')
field_ob_sval = self.field('ob_sval')
- return ''.join(struct.pack('b', field_ob_sval[i])
- for i in safe_range(field_ob_size))
+ char_ptr = field_ob_sval.address.cast(_type_unsigned_char_ptr())
+ return ''.join([chr(char_ptr[i]) for i in safe_range(field_ob_size)])
def proxyval(self, visited):
return str(self)
- def write_repr(self, out, visited, py3=True):
+ def write_repr(self, out, visited):
# Write this out as a Python 3 bytes literal, i.e. with a "b" prefix
# Get a PyStringObject* within the Python 2 gdb process:
@@ -1118,10 +1071,7 @@
quote = "'"
if "'" in proxy and not '"' in proxy:
quote = '"'
-
- if py3:
- out.write('b')
-
+ out.write('b')
out.write(quote)
for byte in proxy:
if byte == quote or byte == '\\':
@@ -1145,9 +1095,6 @@
class PyStringObjectPtr(PyBytesObjectPtr):
_typename = 'PyStringObject'
- def write_repr(self, out, visited):
- return super(PyStringObjectPtr, self).write_repr(out, visited, py3=False)
-
class PyTupleObjectPtr(PyObjectPtr):
_typename = 'PyTupleObject'
@@ -1163,8 +1110,8 @@
return ProxyAlreadyVisited('(...)')
visited.add(self.as_address())
- result = tuple([PyObjectPtr.from_pyobject_ptr(self[i]).proxyval(visited)
- for i in safe_range(int_from_int(self.field('ob_size')))])
+ result = tuple(PyObjectPtr.from_pyobject_ptr(self[i]).proxyval(visited)
+ for i in safe_range(int_from_int(self.field('ob_size'))))
return result
def write_repr(self, out, visited):
@@ -1185,6 +1132,9 @@
else:
out.write(')')
+class PyTypeObjectPtr(PyObjectPtr):
+ _typename = 'PyTypeObject'
+
def _unichr_is_printable(char):
# Logic adapted from Python 3's Tools/unicode/makeunicodedata.py
@@ -1193,12 +1143,8 @@
import unicodedata
return unicodedata.category(char) not in ("C", "Z")
-
if sys.maxunicode >= 0x10000:
- try:
- _unichr = unichr
- except NameError:
- _unichr = chr
+ _unichr = unichr
else:
# Needed for proper surrogate support if sizeof(Py_UNICODE) is 2 in gdb
def _unichr(x):
@@ -1218,15 +1164,46 @@
return _type_Py_UNICODE.sizeof
def proxyval(self, visited):
- # From unicodeobject.h:
- # Py_ssize_t length; /* Length of raw Unicode data in buffer */
- # Py_UNICODE *str; /* Raw Unicode buffer */
- field_length = int(self.field('length'))
- field_str = self.field('str')
+ global _is_pep393
+ if _is_pep393 is None:
+ fields = gdb.lookup_type('PyUnicodeObject').target().fields()
+ _is_pep393 = 'data' in [f.name for f in fields]
+ if _is_pep393:
+ # Python 3.3 and newer
+ may_have_surrogates = False
+ compact = self.field('_base')
+ ascii = compact['_base']
+ state = ascii['state']
+ is_compact_ascii = (int(state['ascii']) and int(state['compact']))
+ if not int(state['ready']):
+ # string is not ready
+ field_length = long(compact['wstr_length'])
+ may_have_surrogates = True
+ field_str = ascii['wstr']
+ else:
+ field_length = long(ascii['length'])
+ if is_compact_ascii:
+ field_str = ascii.address + 1
+ elif int(state['compact']):
+ field_str = compact.address + 1
+ else:
+ field_str = self.field('data')['any']
+ repr_kind = int(state['kind'])
+ if repr_kind == 1:
+ field_str = field_str.cast(_type_unsigned_char_ptr())
+ elif repr_kind == 2:
+ field_str = field_str.cast(_type_unsigned_short_ptr())
+ elif repr_kind == 4:
+ field_str = field_str.cast(_type_unsigned_int_ptr())
+ else:
+ # Python 3.2 and earlier
+ field_length = long(self.field('length'))
+ field_str = self.field('str')
+ may_have_surrogates = self.char_width() == 2
# Gather a list of ints from the Py_UNICODE array; these are either
- # UCS-2 or UCS-4 code points:
- if self.char_width() > 2:
+ # UCS-1, UCS-2 or UCS-4 code points:
+ if not may_have_surrogates:
Py_UNICODEs = [int(field_str[i]) for i in safe_range(field_length)]
else:
# A more elaborate routine if sizeof(Py_UNICODE) is 2 in the
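The PEP 393 branch added above dispatches on `state['kind']`: compact strings store each code point in 1, 2 or 4 bytes, chosen by the widest character present. That storage width is observable from Python via `sys.getsizeof` (a CPython implementation detail, used here only as a sanity check):

```python
import sys

# Appending one more character of the same kind grows the object
# by exactly one storage unit: 1, 2 or 4 bytes.
widths = [sys.getsizeof(s + s[0]) - sys.getsizeof(s)
          for s in (u'ascii', u'\u0153uvre', u'\U0001F600')]
print(widths)  # [1, 2, 4]
```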
@@ -1253,24 +1230,19 @@
# Convert the int code points to unicode characters, and generate a
# local unicode instance.
# This splits surrogate pairs if sizeof(Py_UNICODE) is 2 here (in gdb).
- result = u''.join([_unichr(ucs) for ucs in Py_UNICODEs])
+ result = u''.join([
+ (_unichr(ucs) if ucs <= 0x10ffff else '\ufffd')
+ for ucs in Py_UNICODEs])
return result
def write_repr(self, out, visited):
+ # Write this out as a Python 3 str literal, i.e. without a "u" prefix
+
# Get a PyUnicodeObject* within the Python 2 gdb process:
proxy = self.proxyval(visited)
# Transliteration of Python 3's Object/unicodeobject.c:unicode_repr
# to Python 2:
- try:
- gdb.parse_and_eval('PyString_Type')
- except RuntimeError:
- # Python 3, don't write 'u' as prefix
- pass
- else:
- # Python 2, write the 'u'
- out.write('u')
-
if "'" in proxy and '"' not in proxy:
quote = '"'
else:
@@ -1371,16 +1343,40 @@
out.write(quote)
- def __unicode__(self):
- return self.proxyval(set())
- def __str__(self):
- # In Python 3, everything is unicode (including attributes of e.g.
- # code objects, such as function names). The Python 2 debugger code
- # uses PyUnicodePtr objects to format strings etc, whereas with a
- # Python 2 debuggee we'd get PyStringObjectPtr instances with __str__.
- # Be compatible with that.
- return unicode(self).encode('UTF-8')
+class wrapperobject(PyObjectPtr):
+ _typename = 'wrapperobject'
+
+ def safe_name(self):
+ try:
+ name = self.field('descr')['d_base']['name'].string()
+ return repr(name)
+ except (NullPyObjectPtr, RuntimeError):
+ return '<unknown name>'
+
+ def safe_tp_name(self):
+ try:
+ return self.field('self')['ob_type']['tp_name'].string()
+ except (NullPyObjectPtr, RuntimeError):
+ return '<unknown tp_name>'
+
+ def safe_self_addresss(self):
+ try:
+ address = long(self.field('self'))
+ return '%#x' % address
+ except (NullPyObjectPtr, RuntimeError):
+ return '<failed to get self address>'
+
+ def proxyval(self, visited):
+ name = self.safe_name()
+ tp_name = self.safe_tp_name()
+ self_address = self.safe_self_addresss()
+ return (""
+ % (name, tp_name, self_address))
+
+ def write_repr(self, out, visited):
+ proxy = self.proxyval(visited)
+ out.write(proxy)
def int_from_int(gdbval):
@@ -1413,13 +1409,15 @@
proxyval = pyop.proxyval(set())
return stringify(proxyval)
-
def pretty_printer_lookup(gdbval):
type = gdbval.type.unqualified()
- if type.code == gdb.TYPE_CODE_PTR:
- type = type.target().unqualified()
- if str(type) in all_pretty_typenames:
- return PyObjectPtrPrinter(gdbval)
+ if type.code != gdb.TYPE_CODE_PTR:
+ return None
+
+ type = type.target().unqualified()
+ t = str(type)
+ if t in ("PyObject", "PyFrameObject", "PyUnicodeObject", "wrapperobject"):
+ return PyObjectPtrPrinter(gdbval)
"""
During development, I've been manually invoking the code in this way:
@@ -1440,22 +1438,21 @@
/usr/lib/libpython2.6.so.1.0-gdb.py
/usr/lib/debug/usr/lib/libpython2.6.so.1.0.debug-gdb.py
"""
-
-
-def register(obj):
+def register (obj):
if obj is None:
obj = gdb
# Wire up the pretty-printer
obj.pretty_printers.append(pretty_printer_lookup)
-register(gdb.current_objfile())
+register (gdb.current_objfile ())
+
+
# Unfortunately, the exact API exposed by the gdb module varies somewhat
# from build to build
# See http://bugs.python.org/issue8279?#msg102276
-
class Frame(object):
'''
Wrapper for gdb.Frame, adding various methods
@@ -1500,9 +1497,26 @@
iter_frame = iter_frame.newer()
return index
- def is_evalframeex(self):
- '''Is this a PyEval_EvalFrameEx frame?'''
- if self._gdbframe.name() == 'PyEval_EvalFrameEx':
+ # We divide frames into:
+ # - "python frames":
+ # - "bytecode frames" i.e. PyEval_EvalFrameEx
+ # - "other python frames": things that are of interest from a python
+ # POV, but aren't bytecode (e.g. GC, GIL)
+ # - everything else
+
+ def is_python_frame(self):
+ '''Is this a _PyEval_EvalFrameDefault frame, or some other important
+ frame? (see is_other_python_frame for what "important" means in this
+ context)'''
+ if self.is_evalframe():
+ return True
+ if self.is_other_python_frame():
+ return True
+ return False
+
+ def is_evalframe(self):
+ '''Is this a _PyEval_EvalFrameDefault frame?'''
+ if self._gdbframe.name() == EVALFRAME:
'''
I believe we also need to filter on the inline
struct frame_id.inline_depth, only regarding frames with
@@ -1511,44 +1525,87 @@
So we reject those with type gdb.INLINE_FRAME
'''
if self._gdbframe.type() == gdb.NORMAL_FRAME:
- # We have a PyEval_EvalFrameEx frame:
+ # We have a _PyEval_EvalFrameDefault frame:
return True
return False
- def read_var(self, varname):
- """
- read_var with respect to code blocks (gdbframe.read_var works with
- respect to the most recent block)
+ def is_other_python_frame(self):
+ '''Is this frame worth displaying in python backtraces?
+ Examples:
+ - waiting on the GIL
+ - garbage-collecting
+ - within a CFunction
+ If it is, return a descriptive string
+ For other frames, return False
+ '''
+ if self.is_waiting_for_gil():
+ return 'Waiting for the GIL'
+
+ if self.is_gc_collect():
+ return 'Garbage-collecting'
+
+ # Detect invocations of PyCFunction instances:
+ frame = self._gdbframe
+ caller = frame.name()
+ if not caller:
+ return False
- Apparently this function doesn't work, though, as it seems to read
- variables in other frames also sometimes.
- """
- block = self._gdbframe.block()
- var = None
+ if caller in ('_PyCFunction_FastCallDict',
+ '_PyCFunction_FastCallKeywords'):
+ arg_name = 'func'
+ # Within that frame:
+ # "func" is the local containing the PyObject* of the
+ # PyCFunctionObject instance
+ # "f" is the same value, but cast to (PyCFunctionObject*)
+ # "self" is the (PyObject*) of the 'self'
+ try:
+ # Use the prettyprinter for the func:
+ func = frame.read_var(arg_name)
+ return str(func)
+ except RuntimeError:
+ return 'PyCFunction invocation (unable to read %s)' % arg_name
- while block and var is None:
+ if caller == 'wrapper_call':
try:
- var = self._gdbframe.read_var(varname, block)
- except ValueError:
- pass
+ func = frame.read_var('wp')
+ return str(func)
+ except RuntimeError:
+ return ''
- block = block.superblock
+ # This frame isn't worth reporting:
+ return False
- return var
+ def is_waiting_for_gil(self):
+ '''Is this frame waiting on the GIL?'''
+ # This assumes the _POSIX_THREADS version of Python/ceval_gil.h:
+ name = self._gdbframe.name()
+ if name:
+ return 'pthread_cond_timedwait' in name
+
+ def is_gc_collect(self):
+ '''Is this frame "collect" within the garbage-collector?'''
+ return self._gdbframe.name() == 'collect'
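The classification helpers in this hunk key entirely off gdb frame names. A minimal standalone sketch of the same string checks (the `EVALFRAME` value is assumed here to be CPython's `_PyEval_EvalFrameDefault`, as the docstrings above suggest; the function and its return labels are illustrative, not part of the patch):

```python
# Hypothetical sketch of the name-based frame classification above.
EVALFRAME = '_PyEval_EvalFrameDefault'  # assumed interpreter entry point

def classify_frame(name):
    """Mirror the checks in is_evalframe()/is_other_python_frame()."""
    if name == EVALFRAME:
        return 'bytecode'
    if name and 'pthread_cond_timedwait' in name:
        return 'Waiting for the GIL'
    if name == 'collect':
        return 'Garbage-collecting'
    return None
```

As in the patch, the GIL check is substring-based because the symbol name may carry a version suffix (e.g. a glibc versioned symbol), while the evalframe and GC checks require exact matches.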
def get_pyop(self):
try:
- # self.read_var does not always work properly, so select our frame
- # and restore the previously selected frame
- selected_frame = gdb.selected_frame()
- self._gdbframe.select()
- f = gdb.parse_and_eval('f')
- selected_frame.select()
- except RuntimeError:
+ f = self._gdbframe.read_var('f')
+ frame = PyFrameObjectPtr.from_pyobject_ptr(f)
+ if not frame.is_optimized_out():
+ return frame
+ # gdb is unable to get the "f" argument of PyEval_EvalFrameEx()
+ # because it was "optimized out". Try to get "f" from the frame
+ # of the caller, PyEval_EvalCodeEx().
+ orig_frame = frame
+ caller = self._gdbframe.older()
+ if caller:
+ f = caller.read_var('f')
+ frame = PyFrameObjectPtr.from_pyobject_ptr(f)
+ if not frame.is_optimized_out():
+ return frame
+ return orig_frame
+ except ValueError:
return None
- else:
- return PyFrameObjectPtr.from_pyobject_ptr(f)
@classmethod
def get_selected_frame(cls):
@@ -1559,12 +1616,30 @@
@classmethod
def get_selected_python_frame(cls):
- '''Try to obtain the Frame for the python code in the selected frame,
- or None'''
+ '''Try to obtain the Frame for the python-related code in the selected
+ frame, or None'''
+ try:
+ frame = cls.get_selected_frame()
+ except gdb.error:
+ # No frame: Python didn't start yet
+ return None
+
+ while frame:
+ if frame.is_python_frame():
+ return frame
+ frame = frame.older()
+
+ # Not found:
+ return None
+
+ @classmethod
+ def get_selected_bytecode_frame(cls):
+ '''Try to obtain the Frame for the python bytecode interpreter in the
+ selected GDB frame, or None'''
frame = cls.get_selected_frame()
while frame:
- if frame.is_evalframeex():
+ if frame.is_evalframe():
return frame
frame = frame.older()
@@ -1572,17 +1647,41 @@
return None
def print_summary(self):
- if self.is_evalframeex():
+ if self.is_evalframe():
pyop = self.get_pyop()
if pyop:
line = pyop.get_truncated_repr(MAX_OUTPUT_LEN)
write_unicode(sys.stdout, '#%i %s\n' % (self.get_index(), line))
- sys.stdout.write(pyop.current_line())
+ if not pyop.is_optimized_out():
+ line = pyop.current_line()
+ if line is not None:
+ sys.stdout.write(' %s\n' % line.strip())
else:
sys.stdout.write('#%i (unable to read python frame information)\n' % self.get_index())
else:
- sys.stdout.write('#%i\n' % self.get_index())
+ info = self.is_other_python_frame()
+ if info:
+ sys.stdout.write('#%i %s\n' % (self.get_index(), info))
+ else:
+ sys.stdout.write('#%i\n' % self.get_index())
+ def print_traceback(self):
+ if self.is_evalframe():
+ pyop = self.get_pyop()
+ if pyop:
+ pyop.print_traceback()
+ if not pyop.is_optimized_out():
+ line = pyop.current_line()
+ if line is not None:
+ sys.stdout.write(' %s\n' % line.strip())
+ else:
+ sys.stdout.write(' (unable to read python frame information)\n')
+ else:
+ info = self.is_other_python_frame()
+ if info:
+ sys.stdout.write(' %s\n' % info)
+ else:
+ sys.stdout.write(' (not a python frame)\n')
class PyList(gdb.Command):
'''List the current Python source code, if any
@@ -1602,6 +1701,7 @@
gdb.COMMAND_FILES,
gdb.COMPLETE_NONE)
+
def invoke(self, args, from_tty):
import re
@@ -1617,13 +1717,14 @@
if m:
start, end = map(int, m.groups())
- frame = Frame.get_selected_python_frame()
+ # py-list requires an actual PyEval_EvalFrameEx frame:
+ frame = Frame.get_selected_bytecode_frame()
if not frame:
- print('Unable to locate python frame')
+ print('Unable to locate gdb frame for python bytecode interpreter')
return
pyop = frame.get_pyop()
- if not pyop:
+ if not pyop or pyop.is_optimized_out():
print('Unable to read information on python frame')
return
@@ -1637,7 +1738,13 @@
if start<1:
start = 1
- with open(os_fsencode(filename), 'r') as f:
+ try:
+ f = open(os_fsencode(filename), 'r')
+ except IOError as err:
+ sys.stdout.write('Unable to open %s: %s\n'
+ % (filename, err))
+ return
+ with f:
all_lines = f.readlines()
# start and end are 1-based, all_lines is 0-based;
# so [start-1:end] as a python slice gives us [start, end] as a
@@ -1649,13 +1756,17 @@
linestr = '>' + linestr
sys.stdout.write('%4s %s' % (linestr, line))
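The comment in this hunk notes that `start` and `end` are 1-based while `all_lines` is 0-based, so the slice `[start-1:end]` covers exactly the requested inclusive range. A tiny sketch of that conversion (the helper name is invented for illustration):

```python
def select_lines(all_lines, start, end):
    # start and end are 1-based inclusive; Python slices are 0-based and
    # half-open, so [start-1:end] yields exactly lines start..end.
    return all_lines[start - 1:end]
```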
+
# ...and register the command:
PyList()
-
def move_in_stack(move_up):
'''Move up or down the stack (for the py-up/py-down command)'''
frame = Frame.get_selected_python_frame()
+ if not frame:
+ print('Unable to locate python frame')
+ return
+
while frame:
if move_up:
iter_frame = frame.older()
@@ -1665,7 +1776,7 @@
if not iter_frame:
break
- if iter_frame.is_evalframeex():
+ if iter_frame.is_python_frame():
# Result:
if iter_frame.select():
iter_frame.print_summary()
@@ -1678,7 +1789,6 @@
else:
print('Unable to find a newer python frame')
-
class PyUp(gdb.Command):
'Select and print the python stack frame that called this one (if any)'
def __init__(self):
@@ -1687,10 +1797,10 @@
gdb.COMMAND_STACK,
gdb.COMPLETE_NONE)
+
def invoke(self, args, from_tty):
move_in_stack(move_up=True)
-
class PyDown(gdb.Command):
'Select and print the python stack frame called by this one (if any)'
def __init__(self):
@@ -1699,15 +1809,36 @@
gdb.COMMAND_STACK,
gdb.COMPLETE_NONE)
+
def invoke(self, args, from_tty):
move_in_stack(move_up=False)
-
# Not all builds of gdb have gdb.Frame.select
if hasattr(gdb.Frame, 'select'):
PyUp()
PyDown()
+class PyBacktraceFull(gdb.Command):
+ 'Display the current python frame and all the frames within its call stack (if any)'
+ def __init__(self):
+ gdb.Command.__init__ (self,
+ "py-bt-full",
+ gdb.COMMAND_STACK,
+ gdb.COMPLETE_NONE)
+
+
+ def invoke(self, args, from_tty):
+ frame = Frame.get_selected_python_frame()
+ if not frame:
+ print('Unable to locate python frame')
+ return
+
+ while frame:
+ if frame.is_python_frame():
+ frame.print_summary()
+ frame = frame.older()
+
+PyBacktraceFull()
class PyBacktrace(gdb.Command):
'Display the current python frame and all the frames within its call stack (if any)'
@@ -1720,14 +1851,18 @@
def invoke(self, args, from_tty):
frame = Frame.get_selected_python_frame()
+ if not frame:
+ print('Unable to locate python frame')
+ return
+
+ sys.stdout.write('Traceback (most recent call first):\n')
while frame:
- if frame.is_evalframeex():
- frame.print_summary()
+ if frame.is_python_frame():
+ frame.print_traceback()
frame = frame.older()
PyBacktrace()
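Both `py-bt` and `py-bt-full` above share the same loop shape: start at the selected frame, walk towards older frames, and report only the python-relevant ones. A gdb-free sketch of that walk, using a stand-in frame class (all names here are invented; the real code operates on the `Frame` wrapper around `gdb.Frame`):

```python
class FakeFrame:
    """Minimal stand-in for the gdb Frame wrapper used above."""
    def __init__(self, name, older=None, is_python=False):
        self.name = name
        self._older = older
        self._is_python = is_python

    def older(self):
        return self._older

    def is_python_frame(self):
        return self._is_python


def python_backtrace(frame):
    # Same loop shape as PyBacktrace.invoke(): walk older frames,
    # keeping only those classified as python frames.
    names = []
    while frame:
        if frame.is_python_frame():
            names.append(frame.name)
        frame = frame.older()
    return names
```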
-
class PyPrint(gdb.Command):
'Look up the given python variable name, and print it'
def __init__(self):
@@ -1736,6 +1871,7 @@
gdb.COMMAND_DATA,
gdb.COMPLETE_NONE)
+
def invoke(self, args, from_tty):
name = str(args)
@@ -1752,16 +1888,23 @@
pyop_var, scope = pyop_frame.get_var_by_name(name)
if pyop_var:
- print('%s %r = %s' % (
- scope, name, pyop_var.get_truncated_repr(MAX_OUTPUT_LEN)))
+ print('%s %r = %s'
+ % (scope,
+ name,
+ pyop_var.get_truncated_repr(MAX_OUTPUT_LEN)))
else:
print('%r not found' % name)
PyPrint()
-
class PyLocals(gdb.Command):
'Look up the given python variable name, and print it'
+ def __init__(self, command="py-locals"):
+ gdb.Command.__init__ (self,
+ command,
+ gdb.COMMAND_DATA,
+ gdb.COMPLETE_NONE)
+
def invoke(self, args, from_tty):
name = str(args)
@@ -1790,6 +1933,18 @@
def get_namespace(self, pyop_frame):
return pyop_frame.iter_locals()
+PyLocals()
+
+
+##################################################################
+## added, not in CPython
+##################################################################
+
+import re
+import warnings
+import tempfile
+import textwrap
+import itertools
class PyGlobals(PyLocals):
'List all the globals in the currently select Python frame'
@@ -1798,8 +1953,7 @@
return pyop_frame.iter_globals()
-PyLocals("py-locals", gdb.COMMAND_DATA, gdb.COMPLETE_NONE)
-PyGlobals("py-globals", gdb.COMMAND_DATA, gdb.COMPLETE_NONE)
+PyGlobals("py-globals")
class PyNameEquals(gdb.Function):
@@ -1868,14 +2022,13 @@
"""
def __init__(self):
- self.fd, self.filename = tempfile.mkstemp()
- self.file = os.fdopen(self.fd, 'r+')
+ f = tempfile.NamedTemporaryFile('r+')
+ self.file = f
+ self.filename = f.name
+ self.fd = f.fileno()
_execute("set logging file %s" % self.filename)
self.file_position_stack = []
- atexit.register(os.close, self.fd)
- atexit.register(os.remove, self.filename)
-
def __enter__(self):
if not self.file_position_stack:
_execute("set logging redirect on")
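The hunk above swaps `tempfile.mkstemp()` plus two `atexit` handlers for a single `NamedTemporaryFile`, which removes its backing file automatically on close. A sketch of that lifecycle, assuming the default `delete=True` behavior (the helper name is illustrative):

```python
import os
import tempfile

def make_logfile():
    # A NamedTemporaryFile is deleted on close, so no atexit-based
    # os.close/os.remove bookkeeping is needed.
    f = tempfile.NamedTemporaryFile('r+')
    return f, f.name, f.fileno()
```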
@@ -2333,7 +2486,7 @@
def _pointervalue(gdbval):
"""
- Return the value of the pionter as a Python int.
+ Return the value of the pointer as a Python int.
gdbval.type must be a pointer type
"""
@@ -2441,7 +2594,7 @@
inferior.
Of course, executing any code in the inferior may be dangerous and may
- leave the debuggee in an unsafe state or terminate it alltogether.
+ leave the debuggee in an unsafe state or terminate it altogether.
"""
if '\0' in code:
raise gdb.GdbError("String contains NUL byte.")
diff -Nru cython-0.26.1/Cython/Debugger/Tests/test_libcython_in_gdb.py cython-0.29.14/Cython/Debugger/Tests/test_libcython_in_gdb.py
--- cython-0.26.1/Cython/Debugger/Tests/test_libcython_in_gdb.py 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/Cython/Debugger/Tests/test_libcython_in_gdb.py 2018-09-22 14:18:56.000000000 +0000
@@ -39,8 +39,8 @@
try:
return func(self, *args, **kwargs)
- except Exception as e:
- _debug("An exception occurred:", traceback.format_exc(e))
+ except Exception:
+ _debug("An exception occurred:", traceback.format_exc())
raise
return wrapper
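The fix above reflects that in Python 3 `traceback.format_exc()` takes no exception argument; it formats whatever exception is currently being handled (passing `e` as in the old code was interpreted as the deprecated `limit` parameter). A minimal demonstration (the wrapper function is illustrative):

```python
import traceback

def format_current_exception():
    # format_exc() reads the exception currently being handled; no
    # argument is passed.
    try:
        1 / 0
    except ZeroDivisionError:
        return traceback.format_exc()
```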
diff -Nru cython-0.26.1/Cython/Debugger/Tests/TestLibCython.py cython-0.29.14/Cython/Debugger/Tests/TestLibCython.py
--- cython-0.26.1/Cython/Debugger/Tests/TestLibCython.py 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Debugger/Tests/TestLibCython.py 2019-06-30 06:50:51.000000000 +0000
@@ -40,7 +40,7 @@
else:
stdout, _ = p.communicate()
# Based on Lib/test/test_gdb.py
- regex = "GNU gdb [^\d]*(\d+)\.(\d+)"
+ regex = r"GNU gdb [^\d]*(\d+)\.(\d+)"
gdb_version = re.match(regex, stdout.decode('ascii', 'ignore'))
if gdb_version:
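The raw-string fix above matters because `"\d"` is an invalid string escape in Python source (a `DeprecationWarning` in newer versions, and slated to become an error); the `r` prefix keeps the backslash intact for the regex engine. A quick check against a sample version banner (the banner string here is an assumption for illustration):

```python
import re

# Raw string preserves \d as a regex escape rather than a (bad)
# Python string escape.
regex = r"GNU gdb [^\d]*(\d+)\.(\d+)"
match = re.match(regex, "GNU gdb (GDB) 8.1")
```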
diff -Nru cython-0.26.1/Cython/Debugger/Tests/test_libpython_in_gdb.py cython-0.29.14/Cython/Debugger/Tests/test_libpython_in_gdb.py
--- cython-0.26.1/Cython/Debugger/Tests/test_libpython_in_gdb.py 2015-09-10 16:25:36.000000000 +0000
+++ cython-0.29.14/Cython/Debugger/Tests/test_libpython_in_gdb.py 2018-09-22 14:18:56.000000000 +0000
@@ -56,28 +56,28 @@
else:
funcname = 'PyBytes_FromStringAndSize'
- assert '"' not in string
+ assert b'"' not in string
# ensure double quotes
- code = '(PyObject *) %s("%s", %d)' % (funcname, string, len(string))
+ code = '(PyObject *) %s("%s", %d)' % (funcname, string.decode('iso8859-1'), len(string))
return self.pyobject_fromcode(code, gdbvar=gdbvar)
def alloc_unicodestring(self, string, gdbvar=None):
- self.alloc_bytestring(string.encode('UTF-8'), gdbvar='_temp')
-
postfix = libpython.get_inferior_unicode_postfix()
- funcname = 'PyUnicode%s_FromEncodedObject' % (postfix,)
+ funcname = 'PyUnicode%s_DecodeUnicodeEscape' % (postfix,)
+ data = string.encode("unicode_escape").decode('iso8859-1')
return self.pyobject_fromcode(
- '(PyObject *) %s($_temp, "UTF-8", "strict")' % funcname,
+ '(PyObject *) %s("%s", %d, "strict")' % (
+ funcname, data.replace('"', r'\"').replace('\\', r'\\'), len(data)),
gdbvar=gdbvar)
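The rewritten helper above escapes the test string on the debugger side so that `PyUnicode*_DecodeUnicodeEscape` can rebuild it in the inferior. The key property is that `unicode_escape` yields pure-ASCII bytes, safe to embed in a C string literal; the round-trip can be checked in pure Python:

```python
# Round-trip check of the escaping scheme used in alloc_unicodestring().
s = u"spam ἄλφα"
data = s.encode("unicode_escape").decode("iso8859-1")
# the escaped form contains only ASCII characters
ascii_only = all(ord(c) < 128 for c in data)
restored = data.encode("iso8859-1").decode("unicode_escape")
```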
def test_bytestring(self):
- bytestring = self.alloc_bytestring("spam")
+ bytestring = self.alloc_bytestring(b"spam")
if inferior_python_version < (3, 0):
bytestring_class = libpython.PyStringObjectPtr
- expected = repr("spam")
+ expected = repr(b"spam")
else:
bytestring_class = libpython.PyBytesObjectPtr
expected = "b'spam'"
@@ -88,7 +88,7 @@
def test_unicode(self):
unicode_string = self.alloc_unicodestring(u"spam ἄλφα")
- expected = "'spam ἄλφα'"
+ expected = u"'spam ἄλφα'"
if inferior_python_version < (3, 0):
expected = 'u' + expected
diff -Nru cython-0.26.1/Cython/Distutils/build_ext.py cython-0.29.14/Cython/Distutils/build_ext.py
--- cython-0.26.1/Cython/Distutils/build_ext.py 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Distutils/build_ext.py 2018-09-22 14:18:56.000000000 +0000
@@ -14,9 +14,11 @@
class new_build_ext(_build_ext, object):
def finalize_options(self):
if self.distribution.ext_modules:
+ nthreads = getattr(self, 'parallel', None) # -j option in Py3.5+
+ nthreads = int(nthreads) if nthreads else None
from Cython.Build.Dependencies import cythonize
self.distribution.ext_modules[:] = cythonize(
- self.distribution.ext_modules)
+ self.distribution.ext_modules, nthreads=nthreads, force=self.force)
super(new_build_ext, self).finalize_options()
# This will become new_build_ext in the future.
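The hunk above wires distutils' `--parallel`/`-j` option through to `cythonize()`. The option value may be absent, `0`, an int, or a string, so it is normalised to int-or-None before use; that coercion in isolation (the function name is invented for illustration):

```python
def resolve_nthreads(parallel_option):
    # Mirror the coercion in finalize_options(): falsy values (None, 0,
    # '') mean "no parallelism", anything else is forced to int.
    return int(parallel_option) if parallel_option else None
```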
diff -Nru cython-0.26.1/Cython/Distutils/old_build_ext.py cython-0.29.14/Cython/Distutils/old_build_ext.py
--- cython-0.26.1/Cython/Distutils/old_build_ext.py 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Distutils/old_build_ext.py 2019-11-01 14:13:39.000000000 +0000
@@ -1,9 +1,10 @@
"""Cython.Distutils.old_build_ext
Implements a version of the Distutils 'build_ext' command, for
-building Cython extension modules."""
+building Cython extension modules.
-# This module should be kept compatible with Python 2.3.
+Note that this module is deprecated. Use cythonize() instead.
+"""
__revision__ = "$Id:$"
@@ -190,7 +191,8 @@
for ext in self.extensions:
ext.sources = self.cython_sources(ext.sources, ext)
- self.build_extension(ext)
+ # Call original build_extensions
+ _build_ext.build_ext.build_extensions(self)
def cython_sources(self, sources, extension):
"""
diff -Nru cython-0.26.1/Cython/Includes/cpython/array.pxd cython-0.29.14/Cython/Includes/cpython/array.pxd
--- cython-0.26.1/Cython/Includes/cpython/array.pxd 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Includes/cpython/array.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -59,7 +59,7 @@
ctypedef object GETF(array a, Py_ssize_t ix)
ctypedef object SETF(array a, Py_ssize_t ix, object o)
ctypedef struct arraydescr: # [object arraydescr]:
- int typecode
+ char typecode
int itemsize
GETF getitem # PyObject * (*getitem)(struct arrayobject *, Py_ssize_t);
SETF setitem # int (*setitem)(struct arrayobject *, Py_ssize_t, PyObject *);
@@ -92,7 +92,7 @@
def __getbuffer__(self, Py_buffer* info, int flags):
# This implementation of getbuffer is geared towards Cython
- # requirements, and does not yet fullfill the PEP.
+ # requirements, and does not yet fulfill the PEP.
# In particular strided access is always provided regardless
# of flags
item_count = Py_SIZE(self)
@@ -143,7 +143,7 @@
return op
cdef inline int extend_buffer(array self, char* stuff, Py_ssize_t n) except -1:
- """ efficent appending of new stuff of same type
+ """ efficient appending of new stuff of same type
(e.g. of same array type)
n: number of elements (not number of bytes!) """
cdef Py_ssize_t itemsize = self.ob_descr.itemsize
diff -Nru cython-0.26.1/Cython/Includes/cpython/bytearray.pxd cython-0.29.14/Cython/Includes/cpython/bytearray.pxd
--- cython-0.26.1/Cython/Includes/cpython/bytearray.pxd 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/Cython/Includes/cpython/bytearray.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,33 @@
+from .object cimport PyObject
+
+cdef extern from "Python.h":
+ bint PyByteArray_Check(object o)
+ # Return true if the object o is a bytearray object or an instance of a subtype of the bytearray type.
+
+ bint PyByteArray_CheckExact(object o)
+ # Return true if the object o is a bytearray object, but not an instance of a subtype of the bytearray type.
+
+ bytearray PyByteArray_FromObject(object o)
+ # Return a new bytearray object from any object, o, that implements the buffer protocol.
+
+ bytearray PyByteArray_FromStringAndSize(char *string, Py_ssize_t len)
+ # Create a new bytearray object from string and its length, len. On failure, NULL is returned.
+
+ bytearray PyByteArray_Concat(object a, object b)
+ # Concat bytearrays a and b and return a new bytearray with the result.
+
+ Py_ssize_t PyByteArray_Size(object bytearray)
+ # Return the size of bytearray after checking for a NULL pointer.
+
+ char* PyByteArray_AsString(object bytearray)
+ # Return the contents of bytearray as a char array after checking for a NULL pointer.
+ # The returned array always has an extra null byte appended.
+
+ int PyByteArray_Resize(object bytearray, Py_ssize_t len)
+ # Resize the internal buffer of bytearray to len.
+
+ char* PyByteArray_AS_STRING(object bytearray)
+ # Macro version of PyByteArray_AsString().
+
+ Py_ssize_t PyByteArray_GET_SIZE(object bytearray)
+ # Macro version of PyByteArray_Size().
diff -Nru cython-0.26.1/Cython/Includes/cpython/ceval.pxd cython-0.29.14/Cython/Includes/cpython/ceval.pxd
--- cython-0.26.1/Cython/Includes/cpython/ceval.pxd 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/Cython/Includes/cpython/ceval.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,8 @@
+
+cdef extern from "Python.h":
+
+ void PyEval_InitThreads()
+ # Initialize and acquire the global interpreter lock.
+
+ int PyEval_ThreadsInitialized()
+ # Returns a non-zero value if PyEval_InitThreads() has been called.
diff -Nru cython-0.26.1/Cython/Includes/cpython/dict.pxd cython-0.29.14/Cython/Includes/cpython/dict.pxd
--- cython-0.26.1/Cython/Includes/cpython/dict.pxd 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Cython/Includes/cpython/dict.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -52,7 +52,7 @@
# be hashable; if it isn't, TypeError will be raised. Return 0 on
# success or -1 on failure.
- int PyDict_SetItemString(object p, char *key, object val) except -1
+ int PyDict_SetItemString(object p, const char *key, object val) except -1
# Insert value into the dictionary p using key as a key. key
# should be a char*. The key object is created using
# PyString_FromString(key). Return 0 on success or -1 on failure.
@@ -62,7 +62,7 @@
# hashable; if it isn't, TypeError is raised. Return 0 on success
# or -1 on failure.
- int PyDict_DelItemString(object p, char *key) except -1
+ int PyDict_DelItemString(object p, const char *key) except -1
# Remove the entry in dictionary p which has a key specified by
# the string key. Return 0 on success or -1 on failure.
@@ -72,7 +72,7 @@
# NULL if the key key is not present, but without setting an
# exception.
- PyObject* PyDict_GetItemString(object p, char *key)
+ PyObject* PyDict_GetItemString(object p, const char *key)
# Return value: Borrowed reference.
# This is the same as PyDict_GetItem(), but key is specified as a
# char*, rather than a PyObject*.
diff -Nru cython-0.26.1/Cython/Includes/cpython/exc.pxd cython-0.29.14/Cython/Includes/cpython/exc.pxd
--- cython-0.26.1/Cython/Includes/cpython/exc.pxd 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Includes/cpython/exc.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -153,6 +153,13 @@
# PyErr_SetFromErrno(type);" when the system call returns an
# error.
+ PyObject* PyErr_SetFromErrnoWithFilenameObject(object type, object filenameObject) except NULL
+ # Similar to PyErr_SetFromErrno(), with the additional behavior
+ # that if filenameObject is not NULL, it is passed to the
+ # constructor of type as a third parameter.
+ # In the case of OSError exception, this is used to define
+ # the filename attribute of the exception instance.
+
PyObject* PyErr_SetFromErrnoWithFilename(object type, char *filename) except NULL
# Return value: Always NULL. Similar to PyErr_SetFromErrno(),
# with the additional behavior that if filename is not NULL, it is
diff -Nru cython-0.26.1/Cython/Includes/cpython/__init__.pxd cython-0.29.14/Cython/Includes/cpython/__init__.pxd
--- cython-0.26.1/Cython/Includes/cpython/__init__.pxd 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Cython/Includes/cpython/__init__.pxd 2018-09-22 14:18:56.000000000 +0000
@@ -10,13 +10,13 @@
# Read http://docs.python.org/api/refcounts.html which is so
# important I've copied it below.
#
-# For all the declaration below, whenver the Py_ function returns
+# For all the declaration below, whenever the Py_ function returns
# a *new reference* to a PyObject*, the return type is "object".
# When the function returns a borrowed reference, the return
# type is PyObject*. When Cython sees "object" as a return type
# it doesn't increment the reference count. When it sees PyObject*
# in order to use the result you must explicitly cast to ,
-# and when you do that Cython increments the reference count wether
+# and when you do that Cython increments the reference count whether
# you want it to or not, forcing you to an explicit DECREF (or leak memory).
# To avoid this we make the above convention. Note, you can
# always locally override this convention by putting something like
diff -Nru cython-0.26.1/Cython/Includes/cpython/longintrepr.pxd cython-0.29.14/Cython/Includes/cpython/longintrepr.pxd
--- cython-0.26.1/Cython/Includes/cpython/longintrepr.pxd 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Includes/cpython/longintrepr.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -2,14 +2,13 @@
# This is not part of Python's published API.
cdef extern from "longintrepr.h":
- # Add explicit cast to avoid compiler warnings
- cdef _PyLong_New "(PyObject*)_PyLong_New"(Py_ssize_t s)
-
ctypedef unsigned int digit
ctypedef int sdigit # Python >= 2.7 only
- ctypedef struct PyLongObject:
- digit* ob_digit
+ ctypedef class __builtin__.py_long [object PyLongObject]:
+ cdef digit* ob_digit
+
+ cdef py_long _PyLong_New(Py_ssize_t s)
cdef long PyLong_SHIFT
cdef digit PyLong_BASE
diff -Nru cython-0.26.1/Cython/Includes/cpython/long.pxd cython-0.29.14/Cython/Includes/cpython/long.pxd
--- cython-0.26.1/Cython/Includes/cpython/long.pxd 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Includes/cpython/long.pxd 2018-09-22 14:18:56.000000000 +0000
@@ -146,4 +146,4 @@
# pointer. If pylong cannot be converted, an OverflowError will be
# raised. This is only assured to produce a usable void pointer
# for values created with PyLong_FromVoidPtr(). For values outside
- # 0..LONG_MAX, both signed and unsigned integers are acccepted.
+ # 0..LONG_MAX, both signed and unsigned integers are accepted.
diff -Nru cython-0.26.1/Cython/Includes/cpython/memoryview.pxd cython-0.29.14/Cython/Includes/cpython/memoryview.pxd
--- cython-0.26.1/Cython/Includes/cpython/memoryview.pxd 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/Cython/Includes/cpython/memoryview.pxd 2019-11-01 14:13:39.000000000 +0000
@@ -0,0 +1,50 @@
+cdef extern from "Python.h":
+
+ ###########################################################################
+ # MemoryView Objects
+ ###########################################################################
+ # A memoryview object exposes the C level buffer interface as a Python
+ # object which can then be passed around like any other object
+
+ object PyMemoryView_FromObject(object obj)
+ # Return value: New reference.
+ # Create a memoryview object from an object that provides the buffer
+ # interface. If obj supports writable buffer exports, the memoryview object
+ # will be read/write, otherwise it may be either read-only or read/write at
+ # the discretion of the exporter.
+
+ object PyMemoryView_FromMemory(char *mem, Py_ssize_t size, int flags)
+ # Return value: New reference.
+ # Create a memoryview object using mem as the underlying buffer. flags can
+ # be one of PyBUF_READ or PyBUF_WRITE.
+ # New in version 3.3.
+
+ object PyMemoryView_FromBuffer(Py_buffer *view)
+ # Return value: New reference.
+ # Create a memoryview object wrapping the given buffer structure view. For
+ # simple byte buffers, PyMemoryView_FromMemory() is the preferred function.
+
+ object PyMemoryView_GetContiguous(object obj,
+ int buffertype,
+ char order)
+ # Return value: New reference.
+ # Create a memoryview object to a contiguous chunk of memory (in either ‘C’
+ # or ‘F’ortran order) from an object that defines the buffer interface. If
+ # memory is contiguous, the memoryview object points to the original
+ # memory. Otherwise, a copy is made and the memoryview points to a new
+ # bytes object.
+
+ bint PyMemoryView_Check(object obj)
+ # Return true if the object obj is a memoryview object. It is not currently
+ # allowed to create subclasses of memoryview.
+
+ Py_buffer *PyMemoryView_GET_BUFFER(object mview)
+ # Return a pointer to the memoryview’s private copy of the exporter’s
+ # buffer. mview must be a memoryview instance; this macro doesn’t check its
+ # type, you must do it yourself or you will risk crashes.
+
+ Py_buffer *PyMemoryView_GET_BASE(object mview)
+ # Return either a pointer to the exporting object that the memoryview is
+ # based on or NULL if the memoryview has been created by one of the
+ # functions PyMemoryView_FromMemory() or PyMemoryView_FromBuffer(). mview
+ # must be a memoryview instance.
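The `PyMemoryView_GetContiguous` comment above notes that a view of contiguous memory points at the original buffer rather than a copy. The Python-level `memoryview` shows the same sharing behavior, which may help make the C-level declaration concrete:

```python
# A memoryview of a contiguous buffer shares memory with it, so writes
# through the view are visible in the original object.
buf = bytearray(b"abcdef")
view = memoryview(buf)
is_contig = view.contiguous
view[0] = ord("z")
```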
diff -Nru cython-0.26.1/Cython/Includes/cpython/mem.pxd cython-0.29.14/Cython/Includes/cpython/mem.pxd
--- cython-0.26.1/Cython/Includes/cpython/mem.pxd 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Includes/cpython/mem.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -27,6 +27,7 @@
# available for allocating and releasing memory from the Python
# heap:
+ void* PyMem_RawMalloc(size_t n) nogil
void* PyMem_Malloc(size_t n)
# Allocates n bytes and returns a pointer of type void* to the
# allocated memory, or NULL if the request fails. Requesting zero
@@ -34,6 +35,7 @@
# PyMem_Malloc(1) had been called instead. The memory will not
# have been initialized in any way.
+ void* PyMem_RawRealloc(void *p, size_t n) nogil
void* PyMem_Realloc(void *p, size_t n)
# Resizes the memory block pointed to by p to n bytes. The
# contents will be unchanged to the minimum of the old and the new
@@ -43,6 +45,7 @@
# NULL, it must have been returned by a previous call to
# PyMem_Malloc() or PyMem_Realloc().
+ void PyMem_RawFree(void *p) nogil
void PyMem_Free(void *p)
# Frees the memory block pointed to by p, which must have been
# returned by a previous call to PyMem_Malloc() or
diff -Nru cython-0.26.1/Cython/Includes/cpython/module.pxd cython-0.29.14/Cython/Includes/cpython/module.pxd
--- cython-0.26.1/Cython/Includes/cpython/module.pxd 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Includes/cpython/module.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -6,7 +6,7 @@
#####################################################################
# 5.3 Importing Modules
#####################################################################
- object PyImport_ImportModule(char *name)
+ object PyImport_ImportModule(const char *name)
# Return value: New reference.
# This is a simplified interface to PyImport_ImportModuleEx()
# below, leaving the globals and locals arguments set to
@@ -20,7 +20,7 @@
# loaded.) Return a new reference to the imported module, or NULL
# with an exception set on failure.
- object PyImport_ImportModuleEx(char *name, object globals, object locals, object fromlist)
+ object PyImport_ImportModuleEx(const char *name, object globals, object locals, object fromlist)
# Return value: New reference.
# Import a module. This is best described by referring to the
@@ -64,7 +64,7 @@
# the reloaded module, or NULL with an exception set on failure
# (the module still exists in this case).
- PyObject* PyImport_AddModule(char *name) except NULL
+ PyObject* PyImport_AddModule(const char *name) except NULL
# Return value: Borrowed reference.
# Return the module object corresponding to a module name. The
# name argument may be of the form package.module. First check the
@@ -145,7 +145,7 @@
bint PyModule_CheckExact(object p)
# Return true if p is a module object, but not a subtype of PyModule_Type.
- object PyModule_New(char *name)
+ object PyModule_New(const char *name)
# Return value: New reference.
# Return a new module object with the __name__ attribute set to
# name. Only the module's __doc__ and __name__ attributes are
@@ -170,18 +170,18 @@
# module's __file__ attribute. If this is not defined, or if it is
# not a string, raise SystemError and return NULL.
- int PyModule_AddObject(object module, char *name, object value) except -1
+ int PyModule_AddObject(object module, const char *name, object value) except -1
# Add an object to module as name. This is a convenience function
# which can be used from the module's initialization
# function. This steals a reference to value. Return -1 on error,
# 0 on success.
- int PyModule_AddIntConstant(object module, char *name, long value) except -1
+ int PyModule_AddIntConstant(object module, const char *name, long value) except -1
# Add an integer constant to module as name. This convenience
# function can be used from the module's initialization
# function. Return -1 on error, 0 on success.
- int PyModule_AddStringConstant(object module, char *name, char *value) except -1
+ int PyModule_AddStringConstant(object module, const char *name, const char *value) except -1
# Add a string constant to module as name. This convenience
# function can be used from the module's initialization
# function. The string value must be null-terminated. Return -1 on
diff -Nru cython-0.26.1/Cython/Includes/cpython/object.pxd cython-0.29.14/Cython/Includes/cpython/object.pxd
--- cython-0.26.1/Cython/Includes/cpython/object.pxd 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Includes/cpython/object.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -10,26 +10,27 @@
ctypedef object (*unaryfunc)(object)
ctypedef object (*binaryfunc)(object, object)
ctypedef object (*ternaryfunc)(object, object, object)
- ctypedef int (*inquiry)(object)
- ctypedef Py_ssize_t (*lenfunc)(object)
+ ctypedef int (*inquiry)(object) except -1
+ ctypedef Py_ssize_t (*lenfunc)(object) except -1
ctypedef object (*ssizeargfunc)(object, Py_ssize_t)
ctypedef object (*ssizessizeargfunc)(object, Py_ssize_t, Py_ssize_t)
- ctypedef int (*ssizeobjargproc)(object, Py_ssize_t, object)
- ctypedef int (*ssizessizeobjargproc)(object, Py_ssize_t, Py_ssize_t, object)
- ctypedef int (*objobjargproc)(object, object, object)
- ctypedef int (*objobjproc)(object, object)
+ ctypedef int (*ssizeobjargproc)(object, Py_ssize_t, object) except -1
+ ctypedef int (*ssizessizeobjargproc)(object, Py_ssize_t, Py_ssize_t, object) except -1
+ ctypedef int (*objobjargproc)(object, object, object) except -1
+ ctypedef int (*objobjproc)(object, object) except -1
- ctypedef Py_hash_t (*hashfunc)(object)
+ ctypedef Py_hash_t (*hashfunc)(object) except -1
ctypedef object (*reprfunc)(object)
- ctypedef int (*cmpfunc)(object, object)
+ ctypedef int (*cmpfunc)(object, object) except -2
ctypedef object (*richcmpfunc)(object, object, int)
# The following functions use 'PyObject*' as first argument instead of 'object' to prevent
# accidental reference counting when calling them during a garbage collection run.
ctypedef void (*destructor)(PyObject*)
- ctypedef int (*visitproc)(PyObject*, void *)
- ctypedef int (*traverseproc)(PyObject*, visitproc, void*)
+ ctypedef int (*visitproc)(PyObject*, void *) except -1
+ ctypedef int (*traverseproc)(PyObject*, visitproc, void*) except -1
+ ctypedef void (*freefunc)(void*)
ctypedef object (*descrgetfunc)(object, object, object)
ctypedef int (*descrsetfunc)(object, object, object) except -1
@@ -46,6 +47,7 @@
destructor tp_dealloc
traverseproc tp_traverse
inquiry tp_clear
+ freefunc tp_free
ternaryfunc tp_call
hashfunc tp_hash
@@ -80,12 +82,12 @@
# option currently supported is Py_PRINT_RAW; if given, the str()
# of the object is written instead of the repr().
- bint PyObject_HasAttrString(object o, char *attr_name)
+ bint PyObject_HasAttrString(object o, const char *attr_name)
# Returns 1 if o has the attribute attr_name, and 0
# otherwise. This is equivalent to the Python expression
# "hasattr(o, attr_name)". This function always succeeds.
- object PyObject_GetAttrString(object o, char *attr_name)
+ object PyObject_GetAttrString(object o, const char *attr_name)
# Return value: New reference. Retrieve an attribute named
# attr_name from object o. Returns the attribute value on success,
# or NULL on failure. This is the equivalent of the Python
@@ -104,7 +106,7 @@
object PyObject_GenericGetAttr(object o, object attr_name)
- int PyObject_SetAttrString(object o, char *attr_name, object v) except -1
+ int PyObject_SetAttrString(object o, const char *attr_name, object v) except -1
# Set the value of the attribute named attr_name, for object o, to
# the value v. Returns -1 on failure. This is the equivalent of
# the Python statement "o.attr_name = v".
@@ -116,7 +118,7 @@
int PyObject_GenericSetAttr(object o, object attr_name, object v) except -1
- int PyObject_DelAttrString(object o, char *attr_name) except -1
+ int PyObject_DelAttrString(object o, const char *attr_name) except -1
# Delete attribute named attr_name, for object o. Returns -1 on
# failure. This is the equivalent of the Python statement: "del
# o.attr_name".
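The `*AttrString` functions in this hunk map directly onto Python's built-in attribute functions, as the inline comments say. A minimal Python-level demonstration of the documented equivalences (the class `Obj` and attribute `name` are hypothetical):

```python
class Obj:
    pass

o = Obj()

# The built-ins are the Python-level equivalents of the C API:
setattr(o, "name", "cython")   # ~ PyObject_SetAttrString(o, "name", v)
assert hasattr(o, "name")      # ~ PyObject_HasAttrString
value = getattr(o, "name")     # ~ PyObject_GetAttrString
delattr(o, "name")             # ~ PyObject_DelAttrString
assert not hasattr(o, "name")

print(value)
```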
diff -Nru cython-0.26.1/Cython/Includes/cpython/pylifecycle.pxd cython-0.29.14/Cython/Includes/cpython/pylifecycle.pxd
--- cython-0.26.1/Cython/Includes/cpython/pylifecycle.pxd 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/Cython/Includes/cpython/pylifecycle.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,68 @@
+# Interfaces to configure, query, create & destroy the Python runtime
+
+from libc.stdio cimport FILE
+from .pystate cimport PyThreadState
+
+
+cdef extern from "Python.h":
+ ctypedef int wchar_t
+
+ void Py_SetProgramName(wchar_t *)
+ wchar_t *Py_GetProgramName()
+
+ void Py_SetPythonHome(wchar_t *)
+ wchar_t *Py_GetPythonHome()
+
+ # Only used by applications that embed the interpreter and need to
+ # override the standard encoding determination mechanism
+ int Py_SetStandardStreamEncoding(const char *encoding, const char *errors)
+
+ void Py_Initialize()
+ void Py_InitializeEx(int)
+ void _Py_InitializeEx_Private(int, int)
+ void Py_Finalize()
+ int Py_FinalizeEx()
+ int Py_IsInitialized()
+ PyThreadState *Py_NewInterpreter()
+ void Py_EndInterpreter(PyThreadState *)
+
+
+ # Py_PyAtExit is for the atexit module, Py_AtExit is for low-level
+ # exit functions.
+ void _Py_PyAtExit(void (*func)())
+ int Py_AtExit(void (*func)())
+
+ void Py_Exit(int)
+
+ # Restore signals that the interpreter has called SIG_IGN on to SIG_DFL.
+ void _Py_RestoreSignals()
+
+ int Py_FdIsInteractive(FILE *, const char *)
+
+ # Bootstrap __main__ (defined in Modules/main.c)
+ int Py_Main(int argc, wchar_t **argv)
+
+ # In getpath.c
+ wchar_t *Py_GetProgramFullPath()
+ wchar_t *Py_GetPrefix()
+ wchar_t *Py_GetExecPrefix()
+ wchar_t *Py_GetPath()
+ void Py_SetPath(const wchar_t *)
+ int _Py_CheckPython3()
+
+ # In their own files
+ const char *Py_GetVersion()
+ const char *Py_GetPlatform()
+ const char *Py_GetCopyright()
+ const char *Py_GetCompiler()
+ const char *Py_GetBuildInfo()
+ const char *_Py_gitidentifier()
+ const char *_Py_gitversion()
+
+ ctypedef void (*PyOS_sighandler_t)(int)
+ PyOS_sighandler_t PyOS_getsig(int)
+ PyOS_sighandler_t PyOS_setsig(int, PyOS_sighandler_t)
+
+ # Random
+ int _PyOS_URandom(void *buffer, Py_ssize_t size)
+ int _PyOS_URandomNonblock(void *buffer, Py_ssize_t size)
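Several of the query functions declared in the new `pylifecycle.pxd` have well-known Python-level counterparts in `sys`, which can illustrate what they return without embedding an interpreter (the mapping is approximate, not part of the diff itself):

```python
import sys

# Rough Python-level counterparts of the C-level query functions:
#   Py_GetVersion()   ~ sys.version
#   Py_GetPlatform()  ~ sys.platform
#   Py_GetCopyright() ~ sys.copyright
#   Py_GetPrefix()    ~ sys.prefix
print(sys.version.split()[0])   # version number, e.g. "3.8.10"
print(sys.platform)             # e.g. "linux"

assert isinstance(sys.copyright, str) and sys.copyright
assert isinstance(sys.prefix, str) and sys.prefix
```

The `Py_Initialize`/`Py_Finalize` pair and `Py_Main` are only meaningful from an embedding C application, which is why they have no in-process Python equivalent here.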
diff -Nru cython-0.26.1/Cython/Includes/cpython/pystate.pxd cython-0.29.14/Cython/Includes/cpython/pystate.pxd
--- cython-0.26.1/Cython/Includes/cpython/pystate.pxd 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Cython/Includes/cpython/pystate.pxd 2019-07-26 12:09:39.000000000 +0000
@@ -4,9 +4,11 @@
cdef extern from "Python.h":
- # We make these an opague types. If the user wants specific attributes,
+ # We make these an opaque types. If the user wants specific attributes,
# they can be declared manually.
+ ctypedef long PY_INT64_T # FIXME: Py2.7+, not defined here but used here
+
ctypedef struct PyInterpreterState:
pass
@@ -18,7 +20,8 @@
# This is not actually a struct, but make sure it can never be coerced to
# an int or used in arithmetic expressions
- ctypedef struct PyGILState_STATE
+ ctypedef struct PyGILState_STATE:
+ pass
# The type of the trace function registered using PyEval_SetProfile() and
# PyEval_SetTrace().
@@ -39,13 +42,14 @@
PyInterpreterState * PyInterpreterState_New()
void PyInterpreterState_Clear(PyInterpreterState *)
void PyInterpreterState_Delete(PyInterpreterState *)
+ PY_INT64_T PyInterpreterState_GetID(PyInterpreterState *)
PyThreadState * PyThreadState_New(PyInterpreterState *)
void PyThreadState_Clear(PyThreadState *)
void PyThreadState_Delete(PyThreadState *)
PyThreadState * PyThreadState_Get()
- PyThreadState * PyThreadState_Swap(PyThreadState *)
+ PyThreadState * PyThreadState_Swap(PyThreadState *) # NOTE: DO NOT USE IN CYTHON CODE !
PyObject * PyThreadState_GetDict()
int PyThreadState_SetAsyncExc(long, PyObject *)
diff -Nru cython-0.26.1/Cython/Includes/cpython/pythread.pxd cython-0.29.14/Cython/Includes/cpython/pythread.pxd
--- cython-0.26.1/Cython/Includes/cpython/pythread.pxd 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Cython/Includes/cpython/pythread.pxd 2018-09-22 14:18:56.000000000 +0000
@@ -6,9 +6,11 @@
ctypedef void *PyThread_type_sema
void PyThread_init_thread()
- long PyThread_start_new_thread(void (*)(void *), void *)
+ long PyThread_start_new_thread(void (*)(void *), void *) # FIXME: legacy
+ #unsigned long PyThread_start_new_thread(void (*)(void *), void *) # returned 'long' before Py3.7
void PyThread_exit_thread()
- long PyThread_get_thread_ident()
+ long PyThread_get_thread_ident() # FIXME: legacy
+ #unsigned long PyThread_get_thread_ident() # returned 'long' before Py3.7
PyThread_type_lock PyThread_allocate_lock()
void PyThread_free_lock(PyThread_type_lock)
@@ -29,7 +31,7 @@
size_t PyThread_get_stacksize()
int PyThread_set_stacksize(size_t)
- # Thread Local Storage (TLS) API
+ # Thread Local Storage (TLS) API deprecated in CPython 3.7+
int PyThread_create_key()
void PyThread_delete_key(int)
int PyThread_set_key_value(int, void *)
@@ -38,3 +40,14 @@
# Cleanup after a fork
void PyThread_ReInitTLS()
+
+ # Thread Specific Storage (TSS) API in CPython 3.7+ (also backported)
+ #ctypedef struct Py_tss_t: pass # Cython built-in type
+ Py_tss_t Py_tss_NEEDS_INIT # Not normally useful: Cython auto-initialises declared "Py_tss_t" variables.
+ Py_tss_t * PyThread_tss_alloc()
+ void PyThread_tss_free(Py_tss_t *key)
+ int PyThread_tss_is_created(Py_tss_t *key)
+ int PyThread_tss_create(Py_tss_t *key)
+ void PyThread_tss_delete(Py_tss_t *key)
+ int PyThread_tss_set(Py_tss_t *key, void *value)
+ void * PyThread_tss_get(Py_tss_t *key)
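The Thread Specific Storage (TSS) API added above gives each thread its own value for a shared key. `threading.local` is the Python-level analogue, which can be used to illustrate the per-thread isolation the `PyThread_tss_*` functions provide (the `worker` function and `results` dict are hypothetical scaffolding):

```python
import threading

# threading.local behaves like a TSS key: each thread sees only the
# value it stored itself.
tls = threading.local()
results = {}

def worker(tag):
    tls.value = tag            # ~ PyThread_tss_set(key, value)
    results[tag] = tls.value   # ~ PyThread_tss_get(key)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each thread stored and read back only its own value.
print(results)
```

Similarly, `threading.get_ident()` is the Python-level counterpart of `PyThread_get_thread_ident()`, whose C return type changed from `long` to `unsigned long` in Python 3.7 (the reason for the `FIXME: legacy` comments in the hunk above).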
diff -Nru cython-0.26.1/Cython/Includes/cpython/set.pxd cython-0.29.14/Cython/Includes/cpython/set.pxd
--- cython-0.26.1/Cython/Includes/cpython/set.pxd 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Cython/Includes/cpython/set.pxd 2018-09-22 14:18:56.000000000 +0000
@@ -44,9 +44,15 @@
# Return true if p is a set object or a frozenset object but not
# an instance of a subtype.
+ bint PyFrozenSet_Check(object p)
+ # Return true if p is a frozenset object or an instance of a subtype.
+
bint PyFrozenSet_CheckExact(object p)
# Return true if p is a frozenset object but not an instance of a subtype.
+ bint PySet_Check(object p)
+ # Return true if p is a set object or an instance of a subtype.
+
object PySet_New(object iterable)
# Return value: New reference.
# Return a new set containing objects returned by the
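The newly added `PySet_Check`/`PyFrozenSet_Check` declarations differ from the existing `*_CheckExact` variants in whether subtypes are accepted. The distinction maps onto `isinstance` versus an exact `type` test in Python (the `MySet` subclass is hypothetical):

```python
# Check vs CheckExact: the Check form accepts subtypes (like isinstance),
# the CheckExact form does not (like "type(x) is set").
class MySet(set):
    pass

s = MySet()

assert isinstance(s, set)                  # ~ PySet_Check(s) -> true
assert type(s) is not set                  # ~ PySet_CheckExact(s) -> false
assert isinstance(frozenset(), frozenset)  # ~ PyFrozenSet_Check -> true
assert not isinstance(s, frozenset)        # a set subtype is not a frozenset

print("ok")
```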
diff -Nru cython-0.26.1/Cython/Includes/cpython/weakref.pxd cython-0.29.14/Cython/Includes/cpython/weakref.pxd
--- cython-0.26.1/Cython/Includes/cpython/weakref.pxd 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Cython/Includes/cpython/weakref.pxd 2018-09-22 14:18:56.000000000 +0000
@@ -33,7 +33,7 @@
# a weakly-referencable object, or if callback is not callable,
# None, or NULL, this will return NULL and raise TypeError.
- PyObject* PyWeakref_GetObject(object ref)
+ PyObject* PyWeakref_GetObject(object ref) except NULL
# Return the referenced object from a weak reference, ref. If the
# referent is no longer live, returns None.
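The "returns None when the referent is no longer live" behaviour documented for `PyWeakref_GetObject` is easy to observe through the `weakref` module, its Python-level counterpart (the `Target` class is hypothetical; immediate collection relies on CPython's reference counting):

```python
import weakref

class Target:
    pass

obj = Target()
ref = weakref.ref(obj)

# While the referent is alive, calling the reference returns it
# (the Python-level counterpart of PyWeakref_GetObject).
assert ref() is obj

# Once the referent dies, the reference yields None rather than raising,
# matching the documented behaviour. CPython's reference counting frees
# the object as soon as the last strong reference disappears.
del obj
assert ref() is None

print("ok")
```

The `except NULL` added in the diff reflects that the C function can still fail outright (e.g. when passed a non-weakref), which is distinct from the dead-referent case that returns `None`.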
diff -Nru cython-0.26.1/Cython/Includes/Deprecated/python2.5.pxd cython-0.29.14/Cython/Includes/Deprecated/python2.5.pxd
--- cython-0.26.1/Cython/Includes/Deprecated/python2.5.pxd 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Cython/Includes/Deprecated/python2.5.pxd 1970-01-01 00:00:00.000000000 +0000
@@ -1,622 +0,0 @@
-# From: Eric Huss
-#
-# Here is my latest copy. It does not cover 100% of the API. It should be
-# current up to 2.5.
-#
-# -Eric
-
-
-
-
-# XXX:
-# - Need to support "long long" definitions that are different for different platforms.
-# - Support unicode platform dependencies.
-# - Add unicode calls.
-# - Add setobject calls.
-
-cdef extern from "stdio.h":
- ctypedef struct FILE:
- pass
-
-cdef extern from "Python.h":
-
- # XXX: This is platform dependent.
- ctypedef unsigned short Py_UNICODE
-
- ctypedef struct PyTypeObject:
- pass
-
- ctypedef struct PyObject:
- Py_ssize_t ob_refcnt
- PyTypeObject * ob_type
-
- ###############################################################################################
- # bool
- ###############################################################################################
- PyObject * Py_False
- PyObject * Py_True
- PyTypeObject PyBool_Type
- int PyBool_Check (object) # Always succeeds.
- object PyBool_FromLong (long)
-
- ###############################################################################################
- # buffer
- ###############################################################################################
- PyTypeObject PyBuffer_Type
- int Py_END_OF_BUFFER
- int PyBuffer_Check (object) # Always succeeds.
- object PyBuffer_FromMemory (void *, Py_ssize_t)
- object PyBuffer_FromObject (object, Py_ssize_t, Py_ssize_t)
- object PyBuffer_FromReadWriteMemory (void *, Py_ssize_t)
- object PyBuffer_FromReadWriteObject (object, Py_ssize_t, Py_ssize_t)
- object PyBuffer_New (Py_ssize_t)
- int PyObject_AsCharBuffer (object, char **, Py_ssize_t *) except -1
- int PyObject_AsReadBuffer (object, void **, Py_ssize_t *) except -1
- int PyObject_AsWriteBuffer (object, void **, Py_ssize_t *) except -1
- int PyObject_CheckReadBuffer (object) # Always succeeds.
-
- ###############################################################################################
- # cobject
- ###############################################################################################
- PyTypeObject PyCObject_Type
-
- int PyCObject_Check(object) # Always succeeds.
- object PyCObject_FromVoidPtr(void *, void (*)(void*))
- object PyCObject_FromVoidPtrAndDesc(void *, void *, void (*)(void*,void*))
- void * PyCObject_AsVoidPtr(object) except NULL
- void * PyCObject_GetDesc(object) except NULL
- void * PyCObject_Import(char *, char *) except NULL
-
- ###############################################################################################
- # compile
- ###############################################################################################
-
- ctypedef struct PyCodeObject:
- int co_argcount
- int co_nlocals
- int co_stacksize
- int co_flags
- PyObject *co_code
- PyObject *co_consts
- PyObject *co_names
- PyObject *co_varnames
- PyObject *co_freevars
- PyObject *co_cellvars
- PyObject *co_filename
- PyObject *co_name
- int co_firstlineno
- PyObject *co_lnotab
-
- int PyCode_Addr2Line(PyCodeObject *, int)
-
- ###############################################################################################
- # complex
- ###############################################################################################
- ctypedef struct Py_complex:
- double real
- double imag
-
- PyTypeObject PyComplex_Type
-
- Py_complex PyComplex_AsCComplex (object) # Always succeeds.
- int PyComplex_Check (object) # Always succeeds.
- int PyComplex_CheckExact (object) # Always succeeds.
- object PyComplex_FromCComplex (Py_complex)
- object PyComplex_FromDoubles (double, double)
- double PyComplex_ImagAsDouble (object) except? -1
- double PyComplex_RealAsDouble (object) except? -1
- Py_complex _Py_c_diff (Py_complex, Py_complex)
- Py_complex _Py_c_neg (Py_complex)
- Py_complex _Py_c_pow (Py_complex, Py_complex)
- Py_complex _Py_c_prod (Py_complex, Py_complex)
- Py_complex _Py_c_quot (Py_complex, Py_complex)
- Py_complex _Py_c_sum (Py_complex, Py_complex)
-
- ###############################################################################################
- # dict
- ###############################################################################################
- PyTypeObject PyDict_Type
-
- int PyDict_Check (object) # Always succeeds.
- int PyDict_CheckExact (object) # Always succeeds.
- void PyDict_Clear (object)
- int PyDict_Contains (object, object) except -1
- object PyDict_Copy (object)
- int PyDict_DelItem (object, object) except -1
- int PyDict_DelItemString (object, char *) except -1
- object PyDict_Items (object)
- object PyDict_Keys (object)
- int PyDict_Merge (object, object, int) except -1
- int PyDict_MergeFromSeq2 (object, object, int) except -1
- object PyDict_New ()
- # XXX: Pyrex doesn't support pointer to a python object?
- #int PyDict_Next (object, Py_ssize_t *, object *, object *) # Always succeeds.
- int PyDict_SetItem (object, object, object) except -1
- int PyDict_SetItemString (object, char *, object) except -1
- Py_ssize_t PyDict_Size (object) except -1
- int PyDict_Update (object, object) except -1
- object PyDict_Values (object)
- # XXX: Borrowed reference. No exception on NULL.
- #object PyDict_GetItem (object, object)
- # XXX: Borrowed reference. No exception on NULL
- #object PyDict_GetItemString (object, char *)
-
-
- ###############################################################################################
- # float
- ###############################################################################################
- PyTypeObject PyFloat_Type
- int _PyFloat_Pack4 (double, unsigned char *, int) except -1
- int _PyFloat_Pack8 (double, unsigned char *, int) except -1
- double _PyFloat_Unpack4 (unsigned char *, int) except? -1
- double _PyFloat_Unpack8 (unsigned char *, int) except? -1
- double PyFloat_AS_DOUBLE (object)
- double PyFloat_AsDouble (object) except? -1
- void PyFloat_AsReprString (char*, object)
- void PyFloat_AsString (char*, object)
- int PyFloat_Check (object) # Always succeeds.
- int PyFloat_CheckExact (object) # Always succeeds.
- object PyFloat_FromDouble (double)
- object PyFloat_FromString (object, char**)
-
- ###############################################################################################
- # frame
- ###############################################################################################
-
- ctypedef struct PyFrameObject:
- PyFrameObject *f_back
- PyCodeObject *f_code
- PyObject *f_builtins
- PyObject *f_globals
- PyObject *f_locals
- PyObject *f_trace
- PyObject *f_exc_type
- PyObject *f_exc_value
- PyObject *f_exc_traceback
- int f_lasti
- int f_lineno
- int f_restricted
- int f_iblock
- int f_nlocals
- int f_ncells
- int f_nfreevars
- int f_stacksize
-
- ###############################################################################################
- # int
- ###############################################################################################
- PyTypeObject PyInt_Type
- long PyInt_AS_LONG (object) # Always succeeds.
- long PyInt_AsLong (object) except? -1
- Py_ssize_t PyInt_AsSsize_t (object) except? -1
- unsigned long long PyInt_AsUnsignedLongLongMask (object) except? -1
- unsigned long PyInt_AsUnsignedLongMask (object) except? -1
- int PyInt_Check (object) # Always succeeds.
- int PyInt_CheckExact (object) # Always succeeds.
- object PyInt_FromLong (long)
- object PyInt_FromSsize_t (Py_ssize_t)
- object PyInt_FromString (char*, char**, int)
- object PyInt_FromUnicode (Py_UNICODE*, Py_ssize_t, int)
- long PyInt_GetMax () # Always succeeds.
-
- ###############################################################################################
- # iterator
- ###############################################################################################
- int PyIter_Check (object) # Always succeeds.
- object PyIter_Next (object)
-
- ###############################################################################################
- # list
- ###############################################################################################
- PyTypeObject PyList_Type
- int PyList_Append (object, object) except -1
- object PyList_AsTuple (object)
- int PyList_Check (object) # Always succeeds.
- int PyList_CheckExact (object) # Always succeeds.
- int PyList_GET_SIZE (object) # Always suceeds.
- object PyList_GetSlice (object, Py_ssize_t, Py_ssize_t)
- int PyList_Insert (object, Py_ssize_t, object) except -1
- object PyList_New (Py_ssize_t)
- int PyList_Reverse (object) except -1
- int PyList_SetSlice (object, Py_ssize_t, Py_ssize_t, object) except -1
- Py_ssize_t PyList_Size (object) except -1
- int PyList_Sort (object) except -1
-
- ###############################################################################################
- # long
- ###############################################################################################
- PyTypeObject PyLong_Type
- int _PyLong_AsByteArray (object, unsigned char *, size_t, int, int) except -1
- object _PyLong_FromByteArray (unsigned char *, size_t, int, int)
- size_t _PyLong_NumBits (object) except -1
- int _PyLong_Sign (object) # No error.
- long PyLong_AsLong (object) except? -1
- long long PyLong_AsLongLong (object) except? -1
- unsigned long PyLong_AsUnsignedLong (object) except? -1
- unsigned long PyLong_AsUnsignedLongMask (object) except? -1
- unsigned long long PyLong_AsUnsignedLongLong (object) except? -1
- unsigned long long PyLong_AsUnsignedLongLongMask (object) except? -1
- int PyLong_Check (object) # Always succeeds.
- int PyLong_CheckExact (object) # Always succeeds.
- object PyLong_FromDouble (double)
- object PyLong_FromLong (long)
- object PyLong_FromLongLong (long long)
- object PyLong_FromUnsignedLong (unsigned long)
- object PyLong_FromUnsignedLongLong (unsigned long long)
- double PyLong_AsDouble (object) except? -1
- object PyLong_FromVoidPtr (void *)
- void * PyLong_AsVoidPtr (object) except NULL
- object PyLong_FromString (char *, char **, int)
- object PyLong_FromUnicode (Py_UNICODE*, Py_ssize_t, int)
-
- ###############################################################################################
- # mapping
- ###############################################################################################
- int PyMapping_Check (object) # Always succeeds.
- int PyMapping_DelItem (object, object) except -1
- int PyMapping_DelItemString (object, char *) except -1
- object PyMapping_GetItemString (object, char *)
- int PyMapping_HasKey (object, object) # Always succeeds.
- int PyMapping_HasKeyString (object, char *) # Always succeeds.
- object PyMapping_Items (object)
- object PyMapping_Keys (object)
- Py_ssize_t PyMapping_Length (object) except -1
- int PyMapping_SetItemString (object, char *, object) except -1
- Py_ssize_t PyMapping_Size (object) except -1
- object PyMapping_Values (object)
-
- ###############################################################################################
- # mem
- ###############################################################################################
- void PyMem_Free (void * p)
- void * PyMem_Malloc (size_t n)
- void * PyMem_Realloc (void *, size_t)
-
- ###############################################################################################
- # modsupport
- ###############################################################################################
- object Py_BuildValue (char *, ...)
- object Py_VaBuildValue (char *, va_list)
-
- ###############################################################################################
- # number
- ###############################################################################################
- object PyNumber_Absolute (object)
- object PyNumber_Add (object, object)
- object PyNumber_And (object, object)
- Py_ssize_t PyNumber_AsSsize_t (object, object) except? -1
- int PyNumber_Check (object) # Always succeeds.
- # XXX: Pyrex doesn't support pointer to python object?
- #int PyNumber_Coerce (object*, object*) except -1
- object PyNumber_Divide (object, object)
- object PyNumber_Divmod (object, object)
- object PyNumber_Float (object)
- object PyNumber_FloorDivide (object, object)
- object PyNumber_InPlaceAdd (object, object)
- object PyNumber_InPlaceAnd (object, object)
- object PyNumber_InPlaceDivide (object, object)
- object PyNumber_InPlaceFloorDivide (object, object)
- object PyNumber_InPlaceLshift (object, object)
- object PyNumber_InPlaceMultiply (object, object)
- object PyNumber_InPlaceOr (object, object)
- object PyNumber_InPlacePower (object, object, object)
- object PyNumber_InPlaceRemainder (object, object)
- object PyNumber_InPlaceRshift (object, object)
- object PyNumber_InPlaceSubtract (object, object)
- object PyNumber_InPlaceTrueDivide (object, object)
- object PyNumber_InPlaceXor (object, object)
- object PyNumber_Int (object)
- object PyNumber_Invert (object)
- object PyNumber_Long (object)
- object PyNumber_Lshift (object, object)
- object PyNumber_Multiply (object, object)
- object PyNumber_Negative (object)
- object PyNumber_Or (object, object)
- object PyNumber_Positive (object)
- object PyNumber_Power (object, object, object)
- object PyNumber_Remainder (object, object)
- object PyNumber_Rshift (object, object)
- object PyNumber_Subtract (object, object)
- object PyNumber_TrueDivide (object, object)
- object PyNumber_Xor (object, object)
-
- ###############################################################################################
- # object
- ###############################################################################################
- int PyCallable_Check (object) # Always succeeds.
- int PyObject_AsFileDescriptor (object) except -1
- object PyObject_Call (object, object, object)
- object PyObject_CallFunction (object, char *, ...)
- object PyObject_CallFunctionObjArgs (object, ...)
- object PyObject_CallMethod (object, char *, char *, ...)
- object PyObject_CallMethodObjArgs (object, object, ...)
- object PyObject_CallObject (object, object)
- int PyObject_Cmp (object, object, int *result) except -1
- # Use PyObject_Cmp instead.
- #int PyObject_Compare (object, object)
- int PyObject_DelAttr (object, object) except -1
- int PyObject_DelAttrString (object, char *) except -1
- int PyObject_DelItem (object, object) except -1
- int PyObject_DelItemString (object, char *) except -1
- object PyObject_Dir (object)
- object PyObject_GetAttr (object, object)
- object PyObject_GetAttrString (object, char *)
- object PyObject_GetItem (object, object)
- object PyObject_GetIter (object)
- int PyObject_HasAttr (object, object) # Always succeeds.
- int PyObject_HasAttrString (object, char *) # Always succeeds.
- long PyObject_Hash (object) except -1
- int PyObject_IsInstance (object, object) except -1
- int PyObject_IsSubclass (object, object) except -1
- int PyObject_IsTrue (object) except -1
- Py_ssize_t PyObject_Length (object) except -1
- int PyObject_Not (object) except -1
- int PyObject_Print (object, FILE *, int) except -1
- object PyObject_Repr (object)
- object PyObject_RichCompare (object, object, int)
- int PyObject_RichCompareBool (object, object, int) except -1
- int PyObject_SetAttr (object, object, object) except -1
- int PyObject_SetAttrString (object, char *, object) except -1
- int PyObject_SetItem (object, object, object) except -1
- Py_ssize_t PyObject_Size (object) except -1
- object PyObject_Str (object)
- object PyObject_Type (object)
- int PyObject_TypeCheck (object, object) # Always succeeds.
- object PyObject_Unicode (object)
-
- ###############################################################################################
- # pyerrors
- ###############################################################################################
- int PyErr_BadArgument ()
- void PyErr_BadInternalCall ()
- int PyErr_CheckSignals ()
- void PyErr_Clear ()
- int PyErr_ExceptionMatches (object)
- object PyErr_Format (object, char *, ...)
- int PyErr_GivenExceptionMatches (object, object)
- object PyErr_NoMemory ()
- object PyErr_Occurred ()
- void PyErr_Restore (object, object, object)
- object PyErr_SetFromErrno (object)
- object PyErr_SetFromErrnoWithFilename (object, char *)
- object PyErr_SetFromErrnoWithFilenameObject (object, object)
- void PyErr_SetInterrupt ()
- void PyErr_SetNone (object)
- void PyErr_SetObject (object, object)
- void PyErr_SetString (object, char *)
- int PyErr_Warn (object, char *)
- int PyErr_WarnExplicit (object, char *, char *, int, char *, object)
- void PyErr_WriteUnraisable (object)
-
- ###############################################################################################
- # pyeval
- # Be extremely careful with these functions.
- ###############################################################################################
-
- ctypedef struct PyThreadState:
- PyFrameObject * frame
- int recursion_depth
- void * curexc_type, * curexc_value, * curexc_traceback
- void * exc_type, * exc_value, * exc_traceback
-
- void PyEval_AcquireLock ()
- void PyEval_ReleaseLock ()
- void PyEval_AcquireThread (PyThreadState *)
- void PyEval_ReleaseThread (PyThreadState *)
- PyThreadState* PyEval_SaveThread ()
- void PyEval_RestoreThread (PyThreadState *)
-
- ###############################################################################################
- # pystate
- # Be extremely careful with these functions. Read PEP 311 for more detail.
- ###############################################################################################
-
- ctypedef int PyGILState_STATE
- PyGILState_STATE PyGILState_Ensure ()
- void PyGILState_Release (PyGILState_STATE)
-
- ctypedef struct PyInterpreterState:
- pass
-
- PyThreadState* PyThreadState_New (PyInterpreterState *)
- void PyThreadState_Clear (PyThreadState *)
- void PyThreadState_Delete (PyThreadState *)
- PyThreadState* PyThreadState_Get ()
- PyThreadState* PyThreadState_Swap (PyThreadState *tstate)
- # XXX: Borrowed reference.
- #object PyThreadState_GetDict ()
-
- ###############################################################################################
- # run
- # Functions for embedded interpreters are not included.
- ###############################################################################################
- ctypedef struct PyCompilerFlags:
- int cf_flags
-
- ctypedef struct _node:
- pass
-
- ctypedef void (*PyOS_sighandler_t)(int)
-
- void PyErr_Display (object, object, object)
- void PyErr_Print ()
- void PyErr_PrintEx (int)
- char * PyOS_Readline (FILE *, FILE *, char *)
- PyOS_sighandler_t PyOS_getsig (int)
- PyOS_sighandler_t PyOS_setsig (int, PyOS_sighandler_t)
- _node * PyParser_SimpleParseFile (FILE *, char *, int) except NULL
- _node * PyParser_SimpleParseFileFlags (FILE *, char *, int,
- int) except NULL
- _node * PyParser_SimpleParseString (char *, int) except NULL
- _node * PyParser_SimpleParseStringFlagsFilename(char *, char *,
- int, int) except NULL
- _node * PyParser_SimpleParseStringFlags (char *, int, int) except NULL
- int PyRun_AnyFile (FILE *, char *) except -1
- int PyRun_AnyFileEx (FILE *, char *, int) except -1
- int PyRun_AnyFileExFlags (FILE *, char *, int,
- PyCompilerFlags *) except -1
- int PyRun_AnyFileFlags (FILE *, char *,
- PyCompilerFlags *) except -1
- object PyRun_File (FILE *, char *, int,
- object, object)
- object PyRun_FileEx (FILE *, char *, int,
- object, object, int)
- object PyRun_FileExFlags (FILE *, char *, int,
- object, object, int,
- PyCompilerFlags *)
- object PyRun_FileFlags (FILE *, char *, int,
- object, object,
- PyCompilerFlags *)
- int PyRun_InteractiveLoop (FILE *, char *) except -1
- int PyRun_InteractiveLoopFlags (FILE *, char *,
- PyCompilerFlags *) except -1
- int PyRun_InteractiveOne (FILE *, char *) except -1
- int PyRun_InteractiveOneFlags (FILE *, char *,
- PyCompilerFlags *) except -1
- int PyRun_SimpleFile (FILE *, char *) except -1
- int PyRun_SimpleFileEx (FILE *, char *, int) except -1
- int PyRun_SimpleFileExFlags (FILE *, char *, int,
- PyCompilerFlags *) except -1
- int PyRun_SimpleString (char *) except -1
- int PyRun_SimpleStringFlags (char *, PyCompilerFlags *) except -1
- object PyRun_String (char *, int, object,
- object)
- object PyRun_StringFlags (char *, int, object,
- object, PyCompilerFlags *)
- int Py_AtExit (void (*func)())
- object Py_CompileString (char *, char *, int)
- object Py_CompileStringFlags (char *, char *, int, PyCompilerFlags *)
- void Py_Exit (int)
- int Py_FdIsInteractive (FILE *, char *) # Always succeeds.
- char * Py_GetBuildInfo ()
- char * Py_GetCompiler ()
- char * Py_GetCopyright ()
- char * Py_GetExecPrefix ()
- char * Py_GetPath ()
- char * Py_GetPlatform ()
- char * Py_GetPrefix ()
- char * Py_GetProgramFullPath ()
- char * Py_GetProgramName ()
- char * Py_GetPythonHome ()
- char * Py_GetVersion ()
-
- ###############################################################################################
- # sequence
- ###############################################################################################
- int PySequence_Check (object) # Always succeeds.
- object PySequence_Concat (object, object)
- int PySequence_Contains (object, object) except -1
- Py_ssize_t PySequence_Count (object, object) except -1
- int PySequence_DelItem (object, Py_ssize_t) except -1
- int PySequence_DelSlice (object, Py_ssize_t, Py_ssize_t) except -1
- object PySequence_Fast (object, char *)
- int PySequence_Fast_GET_SIZE (object)
- object PySequence_GetItem (object, Py_ssize_t)
- object PySequence_GetSlice (object, Py_ssize_t, Py_ssize_t)
- object PySequence_ITEM (object, int)
- int PySequence_In (object, object) except -1
- object PySequence_InPlaceConcat (object, object)
- object PySequence_InPlaceRepeat (object, Py_ssize_t)
- Py_ssize_t PySequence_Index (object, object) except -1
- Py_ssize_t PySequence_Length (object) except -1
- object PySequence_List (object)
- object PySequence_Repeat (object, Py_ssize_t)
- int PySequence_SetItem (object, Py_ssize_t, object) except -1
- int PySequence_SetSlice (object, Py_ssize_t, Py_ssize_t, object) except -1
- Py_ssize_t PySequence_Size (object) except -1
- object PySequence_Tuple (object)
-
- ###############################################################################################
- # string
- ###############################################################################################
- PyTypeObject PyString_Type
- # Pyrex cannot support resizing because you have no choice but to use
- # realloc which may call free() on the object, and there's no way to tell
- # Pyrex to "forget" reference counting for the object.
- #int _PyString_Resize (object *, Py_ssize_t) except -1
- char * PyString_AS_STRING (object) # Always succeeds.
- object PyString_AsDecodedObject (object, char *, char *)
- object PyString_AsEncodedObject (object, char *, char *)
- object PyString_AsEncodedString (object, char *, char *)
- char * PyString_AsString (object) except NULL
- int PyString_AsStringAndSize (object, char **, Py_ssize_t *) except -1
- int PyString_Check (object) # Always succeeds.
- int PyString_CHECK_INTERNED (object) # Always succeeds.
- int PyString_CheckExact (object) # Always succeeds.
- # XXX: Pyrex doesn't support pointer to a python object?
- #void PyString_Concat (object *, object)
- # XXX: Pyrex doesn't support pointer to a python object?
- #void PyString_ConcatAndDel (object *, object)
- object PyString_Decode (char *, int, char *, char *)
- object PyString_DecodeEscape (char *, int, char *, int, char *)
- object PyString_Encode (char *, int, char *, char *)
- object PyString_Format (object, object)
- object PyString_FromFormat (char*, ...)
- object PyString_FromFormatV (char*, va_list)
- object PyString_FromString (char *)
- object PyString_FromStringAndSize (char *, Py_ssize_t)
- Py_ssize_t PyString_GET_SIZE (object) # Always succeeds.
- object PyString_InternFromString (char *)
- # XXX: Pyrex doesn't support pointer to a python object?
- #void PyString_InternImmortal (object*)
- # XXX: Pyrex doesn't support pointer to a python object?
- #void PyString_InternInPlace (object*)
- object PyString_Repr (object, int)
- Py_ssize_t PyString_Size (object) except -1
-
- # Disgusting hack to access internal object values.
- ctypedef struct PyStringObject:
- int ob_refcnt
- PyTypeObject * ob_type
- int ob_size
- long ob_shash
- int ob_sstate
- char * ob_sval
-
- ###############################################################################################
- # tuple
- ###############################################################################################
- PyTypeObject PyTuple_Type
- # See PyString_Resize note about resizing.
- #int _PyTuple_Resize (object*, Py_ssize_t) except -1
- int PyTuple_Check (object) # Always succeeds.
- int PyTuple_CheckExact (object) # Always succeeds.
- Py_ssize_t PyTuple_GET_SIZE (object) # Always succeeds.
- object PyTuple_GetSlice (object, Py_ssize_t, Py_ssize_t)
- object PyTuple_New (Py_ssize_t)
- object PyTuple_Pack (Py_ssize_t, ...)
- Py_ssize_t PyTuple_Size (object) except -1
-
- ###############################################################################################
- # Dangerous things!
- # Do not use these unless you really, really know what you are doing.
- ###############################################################################################
- void Py_INCREF (object)
- void Py_XINCREF (object)
- void Py_DECREF (object)
- void Py_XDECREF (object)
- void Py_CLEAR (object)
-
- # XXX: Stolen reference.
- void PyTuple_SET_ITEM (object, Py_ssize_t, value)
- # XXX: Borrowed reference.
- object PyTuple_GET_ITEM (object, Py_ssize_t)
- # XXX: Borrowed reference.
- object PyTuple_GetItem (object, Py_ssize_t)
- # XXX: Stolen reference.
- int PyTuple_SetItem (object, Py_ssize_t, object) except -1
-
- # XXX: Steals reference.
- int PyList_SetItem (object, Py_ssize_t, object) except -1
- # XXX: Borrowed reference
- object PyList_GetItem (object, Py_ssize_t)
- # XXX: Borrowed reference, no NULL on error.
- object PyList_GET_ITEM (object, Py_ssize_t)
- # XXX: Stolen reference.
- void PyList_SET_ITEM (object, Py_ssize_t, object)
-
- # XXX: Borrowed reference.
- object PySequence_Fast_GET_ITEM (object, Py_ssize_t)
-
- # First parameter _must_ be a PyStringObject.
- object _PyString_Join (object, object)
diff -Nru cython-0.26.1/Cython/Includes/libc/limits.pxd cython-0.29.14/Cython/Includes/libc/limits.pxd
--- cython-0.26.1/Cython/Includes/libc/limits.pxd 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Includes/libc/limits.pxd 2018-09-22 14:18:56.000000000 +0000
@@ -1,29 +1,28 @@
# 5.2.4.2.1 Sizes of integer types
cdef extern from "":
+ const int CHAR_BIT
+ const int MB_LEN_MAX
- enum: CHAR_BIT
- enum: MB_LEN_MAX
+ const char CHAR_MIN
+ const char CHAR_MAX
- enum: CHAR_MIN
- enum: CHAR_MAX
-
- enum: SCHAR_MIN
- enum: SCHAR_MAX
- enum: UCHAR_MAX
-
- enum: SHRT_MIN
- enum: SHRT_MAX
- enum: USHRT_MAX
-
- enum: INT_MIN
- enum: INT_MAX
- enum: UINT_MAX
-
- enum: LONG_MIN
- enum: LONG_MAX
- enum: ULONG_MAX
-
- enum: LLONG_MIN
- enum: LLONG_MAX
- enum: ULLONG_MAX
+ const signed char SCHAR_MIN
+ const signed char SCHAR_MAX
+ const unsigned char UCHAR_MAX
+
+ const short SHRT_MIN
+ const short SHRT_MAX
+ const unsigned short USHRT_MAX
+
+ const int INT_MIN
+ const int INT_MAX
+ const unsigned int UINT_MAX
+
+ const long LONG_MIN
+ const long LONG_MAX
+ const unsigned long ULONG_MAX
+
+ const long long LLONG_MIN
+ const long long LLONG_MAX
+ const unsigned long long ULLONG_MAX
diff -Nru cython-0.26.1/Cython/Includes/libc/math.pxd cython-0.29.14/Cython/Includes/libc/math.pxd
--- cython-0.26.1/Cython/Includes/libc/math.pxd 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Includes/libc/math.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -1,27 +1,27 @@
cdef extern from "" nogil:
- double M_E
- double e "M_E" # as in Python's math module
- double M_LOG2E
- double M_LOG10E
- double M_LN2
- double M_LN10
- double M_PI
- double pi "M_PI" # as in Python's math module
- double M_PI_2
- double M_PI_4
- double M_1_PI
- double M_2_PI
- double M_2_SQRTPI
- double M_SQRT2
- double M_SQRT1_2
+ const double M_E
+ const double e "M_E" # as in Python's math module
+ const double M_LOG2E
+ const double M_LOG10E
+ const double M_LN2
+ const double M_LN10
+ const double M_PI
+ const double pi "M_PI" # as in Python's math module
+ const double M_PI_2
+ const double M_PI_4
+ const double M_1_PI
+ const double M_2_PI
+ const double M_2_SQRTPI
+ const double M_SQRT2
+ const double M_SQRT1_2
# C99 constants
- float INFINITY
- float NAN
+ const float INFINITY
+ const float NAN
# note: not providing "nan" and "inf" aliases here as nan() is a function in C
- double HUGE_VAL
- float HUGE_VALF
- long double HUGE_VALL
+ const double HUGE_VAL
+ const float HUGE_VALF
+ const long double HUGE_VALL
double acos(double x)
double asin(double x)
diff -Nru cython-0.26.1/Cython/Includes/libc/signal.pxd cython-0.29.14/Cython/Includes/libc/signal.pxd
--- cython-0.26.1/Cython/Includes/libc/signal.pxd 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Includes/libc/signal.pxd 2018-09-22 14:18:56.000000000 +0000
@@ -6,13 +6,6 @@
ctypedef int sig_atomic_t
- enum: SIGABRT
- enum: SIGFPE
- enum: SIGILL
- enum: SIGINT
- enum: SIGSEGV
- enum: SIGTERM
-
sighandler_t SIG_DFL
sighandler_t SIG_IGN
sighandler_t SIG_ERR
@@ -20,49 +13,52 @@
sighandler_t signal (int signum, sighandler_t action)
int raise_"raise" (int signum)
-
-cdef extern from "" nogil:
-
- # Program Error
- enum: SIGFPE
- enum: SIGILL
- enum: SIGSEGV
- enum: SIGBUS
- enum: SIGABRT
- enum: SIGIOT
- enum: SIGTRAP
- enum: SIGEMT
- enum: SIGSYS
- # Termination
- enum: SIGTERM
- enum: SIGINT
- enum: SIGQUIT
- enum: SIGKILL
- enum: SIGHUP
- # Alarm
- enum: SIGALRM
- enum: SIGVTALRM
- enum: SIGPROF
- # Asynchronous I/O
- enum: SIGIO
- enum: SIGURG
- enum: SIGPOLL
- # Job Control
- enum: SIGCHLD
- enum: SIGCLD
- enum: SIGCONT
- enum: SIGSTOP
- enum: SIGTSTP
- enum: SIGTTIN
- enum: SIGTTOU
- # Operation Error
- enum: SIGPIPE
- enum: SIGLOST
- enum: SIGXCPU
- enum: SIGXFSZ
- # Miscellaneous
- enum: SIGUSR1
- enum: SIGUSR2
- enum: SIGWINCH
- enum: SIGINFO
-
+ # Signals
+ enum:
+ # Program Error
+ SIGFPE
+ SIGILL
+ SIGSEGV
+ SIGBUS
+ SIGABRT
+ SIGIOT
+ SIGTRAP
+ SIGEMT
+ SIGSYS
+ SIGSTKFLT
+ # Termination
+ SIGTERM
+ SIGINT
+ SIGQUIT
+ SIGKILL
+ SIGHUP
+ # Alarm
+ SIGALRM
+ SIGVTALRM
+ SIGPROF
+ # Asynchronous I/O
+ SIGIO
+ SIGURG
+ SIGPOLL
+ # Job Control
+ SIGCHLD
+ SIGCLD
+ SIGCONT
+ SIGSTOP
+ SIGTSTP
+ SIGTTIN
+ SIGTTOU
+ # Operation Error
+ SIGPIPE
+ SIGLOST
+ SIGXCPU
+ SIGXFSZ
+ SIGPWR
+ # Miscellaneous
+ SIGUSR1
+ SIGUSR2
+ SIGWINCH
+ SIGINFO
+ # Real-time signals
+ SIGRTMIN
+ SIGRTMAX
diff -Nru cython-0.26.1/Cython/Includes/libcpp/deque.pxd cython-0.29.14/Cython/Includes/libcpp/deque.pxd
--- cython-0.26.1/Cython/Includes/libcpp/deque.pxd 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Includes/libcpp/deque.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -1,21 +1,44 @@
cdef extern from "" namespace "std" nogil:
cdef cppclass deque[T,ALLOCATOR=*]:
+ ctypedef T value_type
+ ctypedef ALLOCATOR allocator_type
+
+ # these should really be allocator_type.size_type and
+ # allocator_type.difference_type to be true to the C++ definition
+ # but cython doesn't support deferred access on template arguments
+ ctypedef size_t size_type
+ ctypedef ptrdiff_t difference_type
+
cppclass iterator:
T& operator*()
iterator operator++()
iterator operator--()
+ iterator operator+(size_type)
+ iterator operator-(size_type)
+ difference_type operator-(iterator)
bint operator==(iterator)
bint operator!=(iterator)
+ bint operator<(iterator)
+ bint operator>(iterator)
+ bint operator<=(iterator)
+ bint operator>=(iterator)
cppclass reverse_iterator:
T& operator*()
- iterator operator++()
- iterator operator--()
+ reverse_iterator operator++()
+ reverse_iterator operator--()
+ reverse_iterator operator+(size_type)
+ reverse_iterator operator-(size_type)
+ difference_type operator-(reverse_iterator)
bint operator==(reverse_iterator)
bint operator!=(reverse_iterator)
+ bint operator<(reverse_iterator)
+ bint operator>(reverse_iterator)
+ bint operator<=(reverse_iterator)
+ bint operator>=(reverse_iterator)
cppclass const_iterator(iterator):
pass
- #cppclass const_reverse_iterator(reverse_iterator):
- # pass
+ cppclass const_reverse_iterator(reverse_iterator):
+ pass
deque() except +
deque(deque&) except +
deque(size_t) except +
@@ -58,3 +81,6 @@
void resize(size_t, T&)
size_t size()
void swap(deque&)
+
+ # C++11 methods
+ void shrink_to_fit()
diff -Nru cython-0.26.1/Cython/Includes/libcpp/forward_list.pxd cython-0.29.14/Cython/Includes/libcpp/forward_list.pxd
--- cython-0.26.1/Cython/Includes/libcpp/forward_list.pxd 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/Cython/Includes/libcpp/forward_list.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,62 @@
+cdef extern from "" namespace "std" nogil:
+ cdef cppclass forward_list[T,ALLOCATOR=*]:
+ ctypedef T value_type
+ ctypedef ALLOCATOR allocator_type
+
+ # these should really be allocator_type.size_type and
+ # allocator_type.difference_type to be true to the C++ definition
+ # but cython doesn't support deferred access on template arguments
+ ctypedef size_t size_type
+ ctypedef ptrdiff_t difference_type
+
+ cppclass iterator:
+ iterator()
+ iterator(iterator &)
+ T& operator*()
+ iterator operator++()
+ bint operator==(iterator)
+ bint operator!=(iterator)
+ cppclass const_iterator(iterator):
+ pass
+ forward_list() except +
+ forward_list(forward_list&) except +
+ forward_list(size_t, T&) except +
+ #forward_list& operator=(forward_list&)
+ bint operator==(forward_list&, forward_list&)
+ bint operator!=(forward_list&, forward_list&)
+ bint operator<(forward_list&, forward_list&)
+ bint operator>(forward_list&, forward_list&)
+ bint operator<=(forward_list&, forward_list&)
+ bint operator>=(forward_list&, forward_list&)
+ void assign(size_t, T&)
+ T& front()
+ iterator before_begin()
+ const_iterator const_before_begin "before_begin"()
+ iterator begin()
+ const_iterator const_begin "begin"()
+ iterator end()
+ const_iterator const_end "end"()
+ bint empty()
+ size_t max_size()
+ void clear()
+ iterator insert_after(iterator, T&)
+ void insert_after(iterator, size_t, T&)
+ iterator erase_after(iterator)
+ iterator erase_after(iterator, iterator)
+ void push_front(T&)
+ void pop_front()
+ void resize(size_t)
+ void resize(size_t, T&)
+ void swap(forward_list&)
+ void merge(forward_list&)
+ void merge[Compare](forward_list&, Compare)
+ void splice_after(iterator, forward_list&)
+ void splice_after(iterator, forward_list&, iterator)
+ void splice_after(iterator, forward_list&, iterator, iterator)
+ void remove(const T&)
+ void remove_if[Predicate](Predicate)
+ void reverse()
+ void unique()
+ void unique[Predicate](Predicate)
+ void sort()
+ void sort[Compare](Compare)
diff -Nru cython-0.26.1/Cython/Includes/libcpp/list.pxd cython-0.29.14/Cython/Includes/libcpp/list.pxd
--- cython-0.26.1/Cython/Includes/libcpp/list.pxd 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Includes/libcpp/list.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -2,6 +2,13 @@
cdef cppclass list[T,ALLOCATOR=*]:
ctypedef T value_type
ctypedef ALLOCATOR allocator_type
+
+ # these should really be allocator_type.size_type and
+ # allocator_type.difference_type to be true to the C++ definition
+ # but cython doesn't support deferred access on template arguments
+ ctypedef size_t size_type
+ ctypedef ptrdiff_t difference_type
+
cppclass iterator:
iterator()
iterator(iterator &)
diff -Nru cython-0.26.1/Cython/Includes/libcpp/map.pxd cython-0.29.14/Cython/Includes/libcpp/map.pxd
--- cython-0.26.1/Cython/Includes/libcpp/map.pxd 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Includes/libcpp/map.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -13,18 +13,14 @@
iterator operator--()
bint operator==(iterator)
bint operator!=(iterator)
- cppclass const_iterator:
- pair[const T, U]& operator*()
- const_iterator operator++()
- const_iterator operator--()
- bint operator==(const_iterator)
- bint operator!=(const_iterator)
cppclass reverse_iterator:
pair[T, U]& operator*()
iterator operator++()
iterator operator--()
bint operator==(reverse_iterator)
bint operator!=(reverse_iterator)
+ cppclass const_iterator(iterator):
+ pass
cppclass const_reverse_iterator(reverse_iterator):
pass
map() except +
@@ -39,6 +35,7 @@
bint operator<=(map&, map&)
bint operator>=(map&, map&)
U& at(const T&) except +
+ const U& const_at "at"(const T&) except +
iterator begin()
const_iterator const_begin "begin" ()
void clear()
diff -Nru cython-0.26.1/Cython/Includes/libcpp/memory.pxd cython-0.29.14/Cython/Includes/libcpp/memory.pxd
--- cython-0.26.1/Cython/Includes/libcpp/memory.pxd 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Includes/libcpp/memory.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -107,3 +107,9 @@
# Temporaries used for exception handling break generated code
unique_ptr[T] make_unique[T](...) # except +
+
+ # No checking on the compatibility of T and U.
+ cdef shared_ptr[T] static_pointer_cast[T, U](const shared_ptr[U]&)
+ cdef shared_ptr[T] dynamic_pointer_cast[T, U](const shared_ptr[U]&)
+ cdef shared_ptr[T] const_pointer_cast[T, U](const shared_ptr[U]&)
+ cdef shared_ptr[T] reinterpret_pointer_cast[T, U](const shared_ptr[U]&)
diff -Nru cython-0.26.1/Cython/Includes/libcpp/queue.pxd cython-0.29.14/Cython/Includes/libcpp/queue.pxd
--- cython-0.26.1/Cython/Includes/libcpp/queue.pxd 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Cython/Includes/libcpp/queue.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -9,6 +9,9 @@
void pop()
void push(T&)
size_t size()
+ # C++11 methods
+ void swap(queue&)
+
cdef cppclass priority_queue[T]:
priority_queue() except +
priority_queue(priority_queue&) except +
@@ -18,3 +21,5 @@
void push(T&)
size_t size()
T& top()
+ # C++11 methods
+ void swap(priority_queue&)
diff -Nru cython-0.26.1/Cython/Includes/libcpp/set.pxd cython-0.29.14/Cython/Includes/libcpp/set.pxd
--- cython-0.26.1/Cython/Includes/libcpp/set.pxd 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Includes/libcpp/set.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -38,14 +38,14 @@
const_iterator const_end "end"()
pair[iterator, iterator] equal_range(const T&)
#pair[const_iterator, const_iterator] equal_range(T&)
- void erase(iterator)
- void erase(iterator, iterator)
+ iterator erase(iterator)
+ iterator erase(iterator, iterator)
size_t erase(T&)
iterator find(T&)
const_iterator const_find "find"(T&)
pair[iterator, bint] insert(const T&) except +
iterator insert(iterator, const T&) except +
- #void insert(input_iterator, input_iterator)
+ void insert(iterator, iterator) except +
#key_compare key_comp()
iterator lower_bound(T&)
const_iterator const_lower_bound "lower_bound"(T&)
diff -Nru cython-0.26.1/Cython/Includes/libcpp/string.pxd cython-0.29.14/Cython/Includes/libcpp/string.pxd
--- cython-0.26.1/Cython/Includes/libcpp/string.pxd 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Includes/libcpp/string.pxd 2018-09-22 14:18:56.000000000 +0000
@@ -9,9 +9,9 @@
cdef cppclass string:
string() except +
- string(char *) except +
- string(char *, size_t) except +
- string(string&) except +
+ string(const char *) except +
+ string(const char *, size_t) except +
+ string(const string&) except +
# as a string formed by a repetition of character c, n times.
string(size_t, char) except +
@@ -63,65 +63,67 @@
char& at(size_t)
char& operator[](size_t)
- int compare(string&)
-
- string& append(string&)
- string& append(string&, size_t, size_t)
- string& append(char *)
- string& append(char *, size_t)
+ char& front() # C++11
+ char& back() # C++11
+ int compare(const string&)
+
+ string& append(const string&)
+ string& append(const string&, size_t, size_t)
+ string& append(const char *)
+ string& append(const char *, size_t)
string& append(size_t, char)
void push_back(char c)
- string& assign (string&)
- string& assign (string&, size_t, size_t)
- string& assign (char *, size_t)
- string& assign (char *)
+ string& assign (const string&)
+ string& assign (const string&, size_t, size_t)
+ string& assign (const char *, size_t)
+ string& assign (const char *)
string& assign (size_t n, char c)
- string& insert(size_t, string&)
- string& insert(size_t, string&, size_t, size_t)
- string& insert(size_t, char* s, size_t)
+ string& insert(size_t, const string&)
+ string& insert(size_t, const string&, size_t, size_t)
+ string& insert(size_t, const char* s, size_t)
- string& insert(size_t, char* s)
+ string& insert(size_t, const char* s)
string& insert(size_t, size_t, char c)
size_t copy(char *, size_t, size_t)
- size_t find(string&)
- size_t find(string&, size_t)
- size_t find(char*, size_t pos, size_t)
- size_t find(char*, size_t pos)
+ size_t find(const string&)
+ size_t find(const string&, size_t)
+ size_t find(const char*, size_t pos, size_t)
+ size_t find(const char*, size_t pos)
size_t find(char, size_t pos)
- size_t rfind(string&, size_t)
- size_t rfind(char* s, size_t, size_t)
- size_t rfind(char*, size_t pos)
+ size_t rfind(const string&, size_t)
+ size_t rfind(const char* s, size_t, size_t)
+ size_t rfind(const char*, size_t pos)
size_t rfind(char c, size_t)
size_t rfind(char c)
- size_t find_first_of(string&, size_t)
- size_t find_first_of(char* s, size_t, size_t)
- size_t find_first_of(char*, size_t pos)
+ size_t find_first_of(const string&, size_t)
+ size_t find_first_of(const char* s, size_t, size_t)
+ size_t find_first_of(const char*, size_t pos)
size_t find_first_of(char c, size_t)
size_t find_first_of(char c)
- size_t find_first_not_of(string&, size_t)
- size_t find_first_not_of(char* s, size_t, size_t)
- size_t find_first_not_of(char*, size_t pos)
+ size_t find_first_not_of(const string&, size_t)
+ size_t find_first_not_of(const char* s, size_t, size_t)
+ size_t find_first_not_of(const char*, size_t pos)
size_t find_first_not_of(char c, size_t)
size_t find_first_not_of(char c)
- size_t find_last_of(string&, size_t)
- size_t find_last_of(char* s, size_t, size_t)
- size_t find_last_of(char*, size_t pos)
+ size_t find_last_of(const string&, size_t)
+ size_t find_last_of(const char* s, size_t, size_t)
+ size_t find_last_of(const char*, size_t pos)
size_t find_last_of(char c, size_t)
size_t find_last_of(char c)
- size_t find_last_not_of(string&, size_t)
- size_t find_last_not_of(char* s, size_t, size_t)
- size_t find_last_not_of(char*, size_t pos)
+ size_t find_last_not_of(const string&, size_t)
+ size_t find_last_not_of(const char* s, size_t, size_t)
+ size_t find_last_not_of(const char*, size_t pos)
string substr(size_t, size_t)
string substr()
@@ -130,27 +132,27 @@
size_t find_last_not_of(char c, size_t)
size_t find_last_not_of(char c)
- #string& operator= (string&)
- #string& operator= (char*)
+ #string& operator= (const string&)
+ #string& operator= (const char*)
#string& operator= (char)
- string operator+ (string& rhs)
- string operator+ (char* rhs)
+ string operator+ (const string& rhs)
+ string operator+ (const char* rhs)
- bint operator==(string&)
- bint operator==(char*)
+ bint operator==(const string&)
+ bint operator==(const char*)
- bint operator!= (string& rhs )
- bint operator!= (char* )
+ bint operator!= (const string& rhs )
+ bint operator!= (const char* )
- bint operator< (string&)
- bint operator< (char*)
+ bint operator< (const string&)
+ bint operator< (const char*)
- bint operator> (string&)
- bint operator> (char*)
+ bint operator> (const string&)
+ bint operator> (const char*)
- bint operator<= (string&)
- bint operator<= (char*)
+ bint operator<= (const string&)
+ bint operator<= (const char*)
- bint operator>= (string&)
- bint operator>= (char*)
+ bint operator>= (const string&)
+ bint operator>= (const char*)
diff -Nru cython-0.26.1/Cython/Includes/libcpp/unordered_map.pxd cython-0.29.14/Cython/Includes/libcpp/unordered_map.pxd
--- cython-0.26.1/Cython/Includes/libcpp/unordered_map.pxd 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Includes/libcpp/unordered_map.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -32,7 +32,8 @@
bint operator>(unordered_map&, unordered_map&)
bint operator<=(unordered_map&, unordered_map&)
bint operator>=(unordered_map&, unordered_map&)
- U& at(T&)
+ U& at(const T&)
+ const U& const_at "at"(const T&)
iterator begin()
const_iterator const_begin "begin"()
void clear()
@@ -41,15 +42,15 @@
iterator end()
const_iterator const_end "end"()
pair[iterator, iterator] equal_range(T&)
- #pair[const_iterator, const_iterator] equal_range(key_type&)
- void erase(iterator)
- void erase(iterator, iterator)
+ pair[const_iterator, const_iterator] const_equal_range "equal_range"(const T&)
+ iterator erase(iterator)
+ iterator erase(iterator, iterator)
size_t erase(T&)
iterator find(T&)
const_iterator const_find "find"(T&)
pair[iterator, bint] insert(pair[T, U]) # XXX pair[T,U]&
iterator insert(iterator, pair[T, U]) # XXX pair[T,U]&
- #void insert(input_iterator, input_iterator)
+ iterator insert(iterator, iterator)
#key_compare key_comp()
iterator lower_bound(T&)
const_iterator const_lower_bound "lower_bound"(T&)
@@ -65,3 +66,9 @@
#value_compare value_comp()
void max_load_factor(float)
float max_load_factor()
+ void rehash(size_t)
+ void reserve(size_t)
+ size_t bucket_count()
+ size_t max_bucket_count()
+ size_t bucket_size(size_t)
+ size_t bucket(const T&)
diff -Nru cython-0.26.1/Cython/Includes/libcpp/unordered_set.pxd cython-0.29.14/Cython/Includes/libcpp/unordered_set.pxd
--- cython-0.26.1/Cython/Includes/libcpp/unordered_set.pxd 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Includes/libcpp/unordered_set.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -37,16 +37,16 @@
iterator end()
const_iterator const_end "end"()
pair[iterator, iterator] equal_range(T&)
- #pair[const_iterator, const_iterator] equal_range(T&)
- void erase(iterator)
- void erase(iterator, iterator)
+ pair[const_iterator, const_iterator] const_equal_range "equal_range"(T&)
+ iterator erase(iterator)
+ iterator erase(iterator, iterator)
size_t erase(T&)
iterator find(T&)
const_iterator const_find "find"(T&)
pair[iterator, bint] insert(T&)
iterator insert(iterator, T&)
- #void insert(input_iterator, input_iterator)
#key_compare key_comp()
+ iterator insert(iterator, iterator)
iterator lower_bound(T&)
const_iterator const_lower_bound "lower_bound"(T&)
size_t max_size()
@@ -59,3 +59,11 @@
iterator upper_bound(T&)
const_iterator const_upper_bound "upper_bound"(T&)
#value_compare value_comp()
+ void max_load_factor(float)
+ float max_load_factor()
+ void rehash(size_t)
+ void reserve(size_t)
+ size_t bucket_count()
+ size_t max_bucket_count()
+ size_t bucket_size(size_t)
+ size_t bucket(const T&)
diff -Nru cython-0.26.1/Cython/Includes/libcpp/vector.pxd cython-0.29.14/Cython/Includes/libcpp/vector.pxd
--- cython-0.26.1/Cython/Includes/libcpp/vector.pxd 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Includes/libcpp/vector.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -5,7 +5,7 @@
# these should really be allocator_type.size_type and
# allocator_type.difference_type to be true to the C++ definition
- # but cython doesn't support defered access on template arguments
+ # but cython doesn't support deferred access on template arguments
ctypedef size_t size_type
ctypedef ptrdiff_t difference_type
@@ -24,10 +24,11 @@
bint operator>=(iterator)
cppclass reverse_iterator:
T& operator*()
- iterator operator++()
- iterator operator--()
- iterator operator+(size_type)
- iterator operator-(size_type)
+ reverse_iterator operator++()
+ reverse_iterator operator--()
+ reverse_iterator operator+(size_type)
+ reverse_iterator operator-(size_type)
+ difference_type operator-(reverse_iterator)
bint operator==(reverse_iterator)
bint operator!=(reverse_iterator)
bint operator<(reverse_iterator)
@@ -83,4 +84,5 @@
# C++11 methods
T* data()
+ const T* const_data "data"()
void shrink_to_fit()
diff -Nru cython-0.26.1/Cython/Includes/numpy/__init__.pxd cython-0.29.14/Cython/Includes/numpy/__init__.pxd
--- cython-0.26.1/Cython/Includes/numpy/__init__.pxd 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Includes/numpy/__init__.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -17,10 +17,10 @@
DEF _buffer_format_string_len = 255
cimport cpython.buffer as pybuf
-from cpython.ref cimport Py_INCREF, Py_XDECREF
-from cpython.object cimport PyObject
+from cpython.ref cimport Py_INCREF
+from cpython.mem cimport PyObject_Malloc, PyObject_Free
+from cpython.object cimport PyObject, PyTypeObject
from cpython.type cimport type
-cimport libc.stdlib as stdlib
cimport libc.stdio as stdio
cdef extern from "Python.h":
@@ -52,6 +52,8 @@
NPY_STRING
NPY_UNICODE
NPY_VOID
+ NPY_DATETIME
+ NPY_TIMEDELTA
NPY_NTYPES
NPY_NOTYPE
@@ -88,6 +90,14 @@
NPY_ANYORDER
NPY_CORDER
NPY_FORTRANORDER
+ NPY_KEEPORDER
+
+ ctypedef enum NPY_CASTING:
+ NPY_NO_CASTING
+ NPY_EQUIV_CASTING
+ NPY_SAFE_CASTING
+ NPY_SAME_KIND_CASTING
+ NPY_UNSAFE_CASTING
ctypedef enum NPY_CLIPMODE:
NPY_CLIP
@@ -113,6 +123,7 @@
NPY_SEARCHRIGHT
enum:
+ # DEPRECATED since NumPy 1.7 ! Do not use in new code!
NPY_C_CONTIGUOUS
NPY_F_CONTIGUOUS
NPY_CONTIGUOUS
@@ -145,6 +156,37 @@
NPY_UPDATE_ALL
+ enum:
+ # Added in NumPy 1.7 to replace the deprecated enums above.
+ NPY_ARRAY_C_CONTIGUOUS
+ NPY_ARRAY_F_CONTIGUOUS
+ NPY_ARRAY_OWNDATA
+ NPY_ARRAY_FORCECAST
+ NPY_ARRAY_ENSURECOPY
+ NPY_ARRAY_ENSUREARRAY
+ NPY_ARRAY_ELEMENTSTRIDES
+ NPY_ARRAY_ALIGNED
+ NPY_ARRAY_NOTSWAPPED
+ NPY_ARRAY_WRITEABLE
+ NPY_ARRAY_UPDATEIFCOPY
+
+ NPY_ARRAY_BEHAVED
+ NPY_ARRAY_BEHAVED_NS
+ NPY_ARRAY_CARRAY
+ NPY_ARRAY_CARRAY_RO
+ NPY_ARRAY_FARRAY
+ NPY_ARRAY_FARRAY_RO
+ NPY_ARRAY_DEFAULT
+
+ NPY_ARRAY_IN_ARRAY
+ NPY_ARRAY_OUT_ARRAY
+ NPY_ARRAY_INOUT_ARRAY
+ NPY_ARRAY_IN_FARRAY
+ NPY_ARRAY_OUT_FARRAY
+ NPY_ARRAY_INOUT_FARRAY
+
+ NPY_ARRAY_UPDATE_ALL
+
cdef enum:
NPY_MAXDIMS
@@ -152,11 +194,26 @@
ctypedef void (*PyArray_VectorUnaryFunc)(void *, void *, npy_intp, void *, void *)
- ctypedef class numpy.dtype [object PyArray_Descr]:
+ ctypedef struct PyArray_ArrayDescr:
+ # shape is a tuple, but Cython doesn't support "tuple shape"
+ # inside a non-PyObject declaration, so we have to declare it
+ # as just a PyObject*.
+ PyObject* shape
+
+ ctypedef struct PyArray_Descr:
+ pass
+
+ ctypedef class numpy.dtype [object PyArray_Descr, check_size ignore]:
# Use PyDataType_* macros when possible, however there are no macros
# for accessing some of the fields, so some are defined.
+ cdef PyTypeObject* typeobj
cdef char kind
cdef char type
+ # Numpy sometimes mutates this without warning (e.g. it'll
+ # sometimes change "|" to "<" in shared dtype objects on
+ # little-endian machines). If this matters to you, use
+ # PyArray_IsNativeByteOrder(dtype.byteorder) instead of
+ # directly accessing this field.
cdef char byteorder
cdef char flags
cdef int type_num
@@ -164,6 +221,10 @@
cdef int alignment
cdef dict fields
cdef tuple names
+ # Use PyDataType_HASSUBARRAY to test whether this field is
+ # valid (the pointer can be NULL). Most users should access
+ # this field via the inline helper method PyDataType_SHAPE.
+ cdef PyArray_ArrayDescr* subarray
ctypedef extern class numpy.flatiter [object PyArrayIterObject]:
# Use through macros
@@ -178,7 +239,7 @@
# like PyArrayObject**.
pass
- ctypedef class numpy.ndarray [object PyArrayObject]:
+ ctypedef class numpy.ndarray [object PyArrayObject, check_size ignore]:
cdef __cythonbufferdefaults__ = {"mode": "strided"}
cdef:
@@ -188,7 +249,7 @@
int ndim "nd"
npy_intp *shape "dimensions"
npy_intp *strides
- dtype descr
+ dtype descr # deprecated since NumPy 1.7 !
PyObject* base
# Note: This syntax (function definition in pxd files) is an
@@ -196,37 +257,30 @@
# -- the details of this may change.
def __getbuffer__(ndarray self, Py_buffer* info, int flags):
# This implementation of getbuffer is geared towards Cython
- # requirements, and does not yet fullfill the PEP.
+ # requirements, and does not yet fulfill the PEP.
# In particular strided access is always provided regardless
# of flags
- if info == NULL: return
-
- cdef int copy_shape, i, ndim
+ cdef int i, ndim
cdef int endian_detector = 1
cdef bint little_endian = ((&endian_detector)[0] != 0)
ndim = PyArray_NDIM(self)
- if sizeof(npy_intp) != sizeof(Py_ssize_t):
- copy_shape = 1
- else:
- copy_shape = 0
-
if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)
- and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):
+ and not PyArray_CHKFLAGS(self, NPY_ARRAY_C_CONTIGUOUS)):
raise ValueError(u"ndarray is not C contiguous")
if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)
- and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):
+ and not PyArray_CHKFLAGS(self, NPY_ARRAY_F_CONTIGUOUS)):
raise ValueError(u"ndarray is not Fortran contiguous")
info.buf = PyArray_DATA(self)
info.ndim = ndim
- if copy_shape:
+ if sizeof(npy_intp) != sizeof(Py_ssize_t):
# Allocate new buffer for strides and shape info.
# This is allocated as one block, strides first.
- info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2)
+ info.strides = PyObject_Malloc(sizeof(Py_ssize_t) * 2 * ndim)
info.shape = info.strides + ndim
for i in range(ndim):
info.strides[i] = PyArray_STRIDES(self)[i]
@@ -240,19 +294,12 @@
cdef int t
cdef char* f = NULL
- cdef dtype descr = self.descr
+ cdef dtype descr = PyArray_DESCR(self)
cdef int offset
- cdef bint hasfields = PyDataType_HASFIELDS(descr)
+ info.obj = self
- if not hasfields and not copy_shape:
- # do not call releasebuffer
- info.obj = None
- else:
- # need to call releasebuffer
- info.obj = self
-
- if not hasfields:
+ if not PyDataType_HASFIELDS(descr):
t = descr.type_num
if ((descr.byteorder == c'>' and little_endian) or
(descr.byteorder == c'<' and not little_endian)):
@@ -279,7 +326,7 @@
info.format = f
return
else:
- info.format = stdlib.malloc(_buffer_format_string_len)
+ info.format = PyObject_Malloc(_buffer_format_string_len)
info.format[0] = c'^' # Native data types, manual alignment
offset = 0
f = _util_dtypestring(descr, info.format + 1,
@@ -289,9 +336,9 @@
def __releasebuffer__(ndarray self, Py_buffer* info):
if PyArray_HASFIELDS(self):
- stdlib.free(info.format)
+ PyObject_Free(info.format)
if sizeof(npy_intp) != sizeof(Py_ssize_t):
- stdlib.free(info.strides)
+ PyObject_Free(info.strides)
# info.shape was stored after info.strides in the same block
@@ -375,6 +422,8 @@
# Macros from ndarrayobject.h
#
bint PyArray_CHKFLAGS(ndarray m, int flags)
+ bint PyArray_IS_C_CONTIGUOUS(ndarray arr)
+ bint PyArray_IS_F_CONTIGUOUS(ndarray arr)
bint PyArray_ISCONTIGUOUS(ndarray m)
bint PyArray_ISWRITEABLE(ndarray m)
bint PyArray_ISALIGNED(ndarray m)
@@ -391,8 +440,8 @@
npy_intp PyArray_DIM(ndarray, size_t)
npy_intp PyArray_STRIDE(ndarray, size_t)
- # object PyArray_BASE(ndarray) wrong refcount semantics
- # dtype PyArray_DESCR(ndarray) wrong refcount semantics
+ PyObject *PyArray_BASE(ndarray) # returns borrowed reference!
+ PyArray_Descr *PyArray_DESCR(ndarray) # returns borrowed reference to dtype!
int PyArray_FLAGS(ndarray)
npy_intp PyArray_ITEMSIZE(ndarray)
int PyArray_TYPE(ndarray arr)
@@ -428,6 +477,7 @@
bint PyDataType_ISEXTENDED(dtype)
bint PyDataType_ISOBJECT(dtype)
bint PyDataType_HASFIELDS(dtype)
+ bint PyDataType_HASSUBARRAY(dtype)
bint PyArray_ISBOOL(ndarray)
bint PyArray_ISUNSIGNED(ndarray)
@@ -714,6 +764,7 @@
object PyArray_CheckAxis (ndarray, int *, int)
npy_intp PyArray_OverflowMultiplyList (npy_intp *, int)
int PyArray_CompareString (char *, char *, size_t)
+ int PyArray_SetBaseObject(ndarray, base) # NOTE: steals a reference to base! Use "set_array_base()" instead.
# Typedefs that matches the runtime dtype objects in
@@ -782,6 +833,12 @@
cdef inline object PyArray_MultiIterNew5(a, b, c, d, e):
return PyArray_MultiIterNew(5, a, b, c, d, e)
+cdef inline tuple PyDataType_SHAPE(dtype d):
+ if PyDataType_HASSUBARRAY(d):
+ return d.subarray.shape
+ else:
+ return ()
+
cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL:
# Recursive utility function used in __getbuffer__ to get format
# string. The new location in the format string is returned.
@@ -962,23 +1019,15 @@
int _import_umath() except -1
-
cdef inline void set_array_base(ndarray arr, object base):
- cdef PyObject* baseptr
- if base is None:
- baseptr = NULL
- else:
- Py_INCREF(base) # important to do this before decref below!
- baseptr = base
- Py_XDECREF(arr.base)
- arr.base = baseptr
+ Py_INCREF(base) # important to do this before stealing the reference below!
+ PyArray_SetBaseObject(arr, base)
cdef inline object get_array_base(ndarray arr):
- if arr.base is NULL:
+ base = PyArray_BASE(arr)
+ if base is NULL:
return None
- else:
- return arr.base
-
+ return base
# Versions of the import_* functions which are more suitable for
# Cython code.
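[editor's illustration] The numpy.pxd hunks above rework `__getbuffer__`/`__releasebuffer__` so that `info.obj` is always set to the exporting array, guaranteeing that `__releasebuffer__` runs for every buffer acquisition. The same acquire/release pairing can be observed from pure Python through `memoryview`, here on a stdlib `array.array` rather than an ndarray:

```python
import array

a = array.array('i', range(5))
m = memoryview(a)        # acquires the buffer (C level: __getbuffer__)

# While the buffer is exported, the owner refuses to resize its storage.
try:
    a.append(99)
    resize_blocked = False
except BufferError:
    resize_blocked = True

m.release()              # releases the buffer (C level: __releasebuffer__)
a.append(99)             # allowed again once no exports remain

print(resize_blocked, len(a))  # True 6
```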
diff -Nru cython-0.26.1/Cython/Includes/posix/mman.pxd cython-0.29.14/Cython/Includes/posix/mman.pxd
--- cython-0.26.1/Cython/Includes/posix/mman.pxd 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Includes/posix/mman.pxd 2019-07-26 12:09:39.000000000 +0000
@@ -24,6 +24,8 @@
enum: MAP_NOCORE # Typically available only on BSD
enum: MAP_NOSYNC
+ void *MAP_FAILED
+
void *mmap(void *addr, size_t Len, int prot, int flags, int fd, off_t off)
int munmap(void *addr, size_t Len)
int mprotect(void *addr, size_t Len, int prot)
@@ -46,17 +48,34 @@
int munlock(const void *addr, size_t Len)
int mlockall(int flags)
int munlockall()
+ # Linux-specific
+ enum: MLOCK_ONFAULT
+ enum: MCL_ONFAULT
+ int mlock2(const void *addr, size_t len, int flags)
int shm_open(const char *name, int oflag, mode_t mode)
int shm_unlink(const char *name)
# often available
- enum: MADV_REMOVE # pre-POSIX advice flags; often available
+ enum: MADV_NORMAL # pre-POSIX advice flags; should translate 1-1 to POSIX_*
+ enum: MADV_RANDOM # but in practice it is not always the same.
+ enum: MADV_SEQUENTIAL
+ enum: MADV_WILLNEED
+ enum: MADV_DONTNEED
+ enum: MADV_REMOVE # other pre-POSIX advice flags; often available
enum: MADV_DONTFORK
enum: MADV_DOFORK
enum: MADV_HWPOISON
enum: MADV_MERGEABLE,
enum: MADV_UNMERGEABLE
+ enum: MADV_SOFT_OFFLINE
+ enum: MADV_HUGEPAGE
+ enum: MADV_NOHUGEPAGE
+ enum: MADV_DONTDUMP
+ enum: MADV_DODUMP
+ enum: MADV_FREE
+ enum: MADV_WIPEONFORK
+ enum: MADV_KEEPONFORK
int madvise(void *addr, size_t Len, int advice)
# sometimes available
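[editor's illustration] The new `MADV_*` constants above are inputs to `madvise()`. Python's stdlib `mmap` module exposes the same call (on mmap objects since 3.8), which gives a quick way to try these advice flags without Cython; the access is guarded with `hasattr` since both the method and the constants are platform-specific:

```python
import mmap

m = mmap.mmap(-1, mmap.PAGESIZE)   # anonymous, page-sized mapping
if hasattr(m, "madvise") and hasattr(mmap, "MADV_WILLNEED"):
    m.madvise(mmap.MADV_WILLNEED)  # advise the kernel we will touch it soon
m[:5] = b"hello"
print(m[:5])
m.close()
```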
diff -Nru cython-0.26.1/Cython/Includes/posix/signal.pxd cython-0.29.14/Cython/Includes/posix/signal.pxd
--- cython-0.26.1/Cython/Includes/posix/signal.pxd 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Includes/posix/signal.pxd 2018-12-14 14:27:50.000000000 +0000
@@ -31,6 +31,11 @@
sigset_t sa_mask
int sa_flags
+ ctypedef struct stack_t:
+ void *ss_sp
+ int ss_flags
+ size_t ss_size
+
enum: SA_NOCLDSTOP
enum: SIG_BLOCK
enum: SIG_UNBLOCK
@@ -63,4 +68,6 @@
int sigdelset (sigset_t *, int)
int sigemptyset (sigset_t *)
int sigfillset (sigset_t *)
- int sigismember (const sigset_t *)
+ int sigismember (const sigset_t *, int)
+
+ int sigaltstack(const stack_t *, stack_t *)
diff -Nru cython-0.26.1/Cython/Includes/posix/stat.pxd cython-0.29.14/Cython/Includes/posix/stat.pxd
--- cython-0.26.1/Cython/Includes/posix/stat.pxd 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Includes/posix/stat.pxd 2018-09-22 14:18:56.000000000 +0000
@@ -18,6 +18,11 @@
time_t st_mtime
time_t st_ctime
+ # st_birthtime exists on *BSD and OS X.
+ # Under Linux, defining it here does not hurt. Compilation under Linux
+ # will only (and rightfully) fail when attempting to use the field.
+ time_t st_birthtime
+
 # POSIX prescribes including both <sys/stat.h> and <fcntl.h> for these
 cdef extern from "<sys/stat.h>" nogil:
int fchmod(int, mode_t)
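[editor's illustration] As the comment above notes, `st_birthtime` only exists on *BSD and macOS. From Python the portable pattern is the same hedge: probe the stat result for the attribute instead of assuming it is there.

```python
import os

st = os.stat(os.getcwd())
mtime = st.st_mtime                        # always available
birth = getattr(st, "st_birthtime", None)  # *BSD/macOS only; None elsewhere
print(mtime > 0, birth is None or birth > 0)
```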
diff -Nru cython-0.26.1/Cython/Includes/posix/time.pxd cython-0.29.14/Cython/Includes/posix/time.pxd
--- cython-0.26.1/Cython/Includes/posix/time.pxd 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Includes/posix/time.pxd 2018-09-22 14:18:56.000000000 +0000
@@ -4,9 +4,6 @@
from posix.signal cimport sigevent
 cdef extern from "<sys/time.h>" nogil:
- enum: CLOCK_PROCESS_CPUTIME_ID
- enum: CLOCK_THREAD_CPUTIME_ID
-
enum: CLOCK_REALTIME
enum: TIMER_ABSTIME
enum: CLOCK_MONOTONIC
diff -Nru cython-0.26.1/Cython/Parser/Grammar cython-0.29.14/Cython/Parser/Grammar
--- cython-0.26.1/Cython/Parser/Grammar 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Parser/Grammar 2018-09-22 14:18:56.000000000 +0000
@@ -127,7 +127,7 @@
# to our LL(1) parser. Even though 'test' includes '*expr' in star_expr,
# we explicitly match '*' here, too, to give it proper precedence.
# Illegal combinations and orderings are blocked in ast.c:
-# multiple (test comp_for) arguements are blocked; keyword unpackings
+# multiple (test comp_for) arguments are blocked; keyword unpackings
# that precede iterable unpackings are blocked; etc.
argument: ( test [comp_for] |
test '=' test |
diff -Nru cython-0.26.1/Cython/Plex/Actions.py cython-0.29.14/Cython/Plex/Actions.py
--- cython-0.26.1/Cython/Plex/Actions.py 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Cython/Plex/Actions.py 2018-11-24 09:20:06.000000000 +0000
@@ -1,3 +1,4 @@
+# cython: auto_pickle=False
#=======================================================================
#
# Python Lexical Analyser
diff -Nru cython-0.26.1/Cython/Plex/Scanners.pxd cython-0.29.14/Cython/Plex/Scanners.pxd
--- cython-0.26.1/Cython/Plex/Scanners.pxd 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Cython/Plex/Scanners.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -28,18 +28,23 @@
cdef public level
+ @cython.final
@cython.locals(input_state=long)
cdef next_char(self)
@cython.locals(action=Action)
cpdef tuple read(self)
+ @cython.final
cdef tuple scan_a_token(self)
- cdef tuple position(self)
+ ##cdef tuple position(self) # used frequently by Parsing.py
- @cython.locals(cur_pos=long, cur_line=long, cur_line_start=long,
- input_state=long, next_pos=long, state=dict,
- buf_start_pos=long, buf_len=long, buf_index=long,
- trace=bint, discard=long, data=unicode, buffer=unicode)
+ @cython.final
+ @cython.locals(cur_pos=Py_ssize_t, cur_line=Py_ssize_t, cur_line_start=Py_ssize_t,
+ input_state=long, next_pos=Py_ssize_t, state=dict,
+ buf_start_pos=Py_ssize_t, buf_len=Py_ssize_t, buf_index=Py_ssize_t,
+ trace=bint, discard=Py_ssize_t, data=unicode, buffer=unicode)
cdef run_machine_inlined(self)
+ @cython.final
cdef begin(self, state)
+ @cython.final
cdef produce(self, value, text = *)
diff -Nru cython-0.26.1/Cython/Plex/Scanners.py cython-0.29.14/Cython/Plex/Scanners.py
--- cython-0.26.1/Cython/Plex/Scanners.py 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Cython/Plex/Scanners.py 2018-11-24 09:20:06.000000000 +0000
@@ -1,3 +1,4 @@
+# cython: auto_pickle=False
#=======================================================================
#
# Python Lexical Analyser
@@ -291,7 +292,7 @@
else: # input_state = 5
self.cur_char = u''
if self.trace:
- print("--> [%d] %d %s" % (input_state, self.cur_pos, repr(self.cur_char)))
+ print("--> [%d] %d %r" % (input_state, self.cur_pos, self.cur_char))
def position(self):
"""
diff -Nru cython-0.26.1/Cython/Runtime/refnanny.pyx cython-0.29.14/Cython/Runtime/refnanny.pyx
--- cython-0.26.1/Cython/Runtime/refnanny.pyx 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Runtime/refnanny.pyx 2018-09-22 14:18:56.000000000 +0000
@@ -1,4 +1,4 @@
-# cython: language_level=3
+# cython: language_level=3, auto_pickle=False
from cpython.ref cimport PyObject, Py_INCREF, Py_DECREF, Py_XDECREF, Py_XINCREF
from cpython.exc cimport PyErr_Fetch, PyErr_Restore
diff -Nru cython-0.26.1/Cython/Shadow.py cython-0.29.14/Cython/Shadow.py
--- cython-0.26.1/Cython/Shadow.py 2017-08-29 06:15:21.000000000 +0000
+++ cython-0.29.14/Cython/Shadow.py 2019-11-01 14:13:39.000000000 +0000
@@ -1,7 +1,7 @@
# cython.* namespace for pure mode.
from __future__ import absolute_import
-__version__ = "0.26.1"
+__version__ = "0.29.14"
try:
from __builtin__ import basestring
@@ -108,11 +108,14 @@
cclass = ccall = cfunc = _EmptyDecoratorAndManager()
returns = wraparound = boundscheck = initializedcheck = nonecheck = \
- overflowcheck = embedsignature = cdivision = cdivision_warnings = \
- always_allows_keywords = profile = linetrace = infer_type = \
+ embedsignature = cdivision = cdivision_warnings = \
+ always_allows_keywords = profile = linetrace = infer_types = \
unraisable_tracebacks = freelist = \
- lambda arg: _EmptyDecoratorAndManager()
+ lambda _: _EmptyDecoratorAndManager()
+exceptval = lambda _=None, check=True: _EmptyDecoratorAndManager()
+
+overflowcheck = lambda _: _EmptyDecoratorAndManager()
optimization = _Optimization()
overflowcheck.fold = optimization.use_switch = \
@@ -183,8 +186,15 @@
return value
class _nogil(object):
- """Support for 'with nogil' statement
+ """Support for 'with nogil' statement and @nogil decorator.
"""
+ def __call__(self, x):
+ if callable(x):
+ # Used as function decorator => return the function unchanged.
+ return x
+ # Used as conditional context manager or to create an "@nogil(True/False)" decorator => keep going.
+ return self
+
def __enter__(self):
pass
def __exit__(self, exc_class, exc, tb):
@@ -194,6 +204,7 @@
gil = _nogil()
del _nogil
+
# Emulated types
class CythonMetaType(type):
@@ -383,7 +394,7 @@
int_types = ['char', 'short', 'Py_UNICODE', 'int', 'Py_UCS4', 'long', 'longlong', 'Py_ssize_t', 'size_t']
float_types = ['longdouble', 'double', 'float']
complex_types = ['longdoublecomplex', 'doublecomplex', 'floatcomplex', 'complex']
-other_types = ['bint', 'void']
+other_types = ['bint', 'void', 'Py_tss_t']
to_repr = {
'longlong': 'long long',
@@ -418,14 +429,17 @@
gs[name] = typedef(py_complex, to_repr(name, name))
bint = typedef(bool, "bint")
-void = typedef(int, "void")
+void = typedef(None, "void")
+Py_tss_t = typedef(None, "Py_tss_t")
for t in int_types + float_types + complex_types + other_types:
for i in range(1, 4):
- gs["%s_%s" % ('p'*i, t)] = globals()[t]._pointer(i)
+ gs["%s_%s" % ('p'*i, t)] = gs[t]._pointer(i)
-void = typedef(None, "void")
-NULL = p_void(0)
+NULL = gs['p_void'](0)
+
+# looks like 'gs' has some users out there by now...
+#del gs
integral = floating = numeric = _FusedType()
@@ -441,7 +455,7 @@
def parallel(self, num_threads=None):
return nogil
- def prange(self, start=0, stop=None, step=1, schedule=None, nogil=False):
+ def prange(self, start=0, stop=None, step=1, nogil=False, schedule=None, chunksize=None, num_threads=None):
if stop is None:
stop = start
start = 0
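[editor's illustration] The `_nogil.__call__` addition above makes one object serve as `@nogil`, `@nogil(True)`, and `with nogil:` alike. A minimal standalone sketch of that dual-use pattern (names here are hypothetical, not Cython's):

```python
class _DualUse(object):
    """No-op usable as a decorator, a parametrised decorator, and a context manager."""
    def __call__(self, x):
        if callable(x):
            return x    # used as "@d" on a function: return it unchanged
        return self     # used as "@d(True)" or "with d(True):": keep going
    def __enter__(self):
        pass
    def __exit__(self, exc_class, exc, tb):
        return False

d = _DualUse()

@d
def f():
    return 42

@d(True)
def g():
    return 43

with d:
    pass

print(f(), g())  # 42 43
```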
diff -Nru cython-0.26.1/Cython/StringIOTree.pxd cython-0.29.14/Cython/StringIOTree.pxd
--- cython-0.26.1/Cython/StringIOTree.pxd 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/Cython/StringIOTree.pxd 2018-09-22 14:18:56.000000000 +0000
@@ -0,0 +1,17 @@
+cimport cython
+
+cdef class StringIOTree:
+ cdef public list prepended_children
+ cdef public object stream
+ cdef public object write
+ cdef public list markers
+
+ @cython.locals(x=StringIOTree)
+ cpdef getvalue(self)
+ @cython.locals(child=StringIOTree)
+ cpdef copyto(self, target)
+ cpdef commit(self)
+ #def insert(self, iotree)
+ #def insertion_point(self)
+ @cython.locals(c=StringIOTree)
+ cpdef allmarkers(self)
diff -Nru cython-0.26.1/Cython/StringIOTree.py cython-0.29.14/Cython/StringIOTree.py
--- cython-0.26.1/Cython/StringIOTree.py 2015-09-10 16:25:36.000000000 +0000
+++ cython-0.29.14/Cython/StringIOTree.py 2018-11-24 09:20:06.000000000 +0000
@@ -1,7 +1,44 @@
+# cython: auto_pickle=False
+
+r"""
+Implements a buffer with insertion points. When you know you need to
+"get back" to a place and write more later, simply call insertion_point()
+at that spot and get a new StringIOTree object that is "left behind".
+
+EXAMPLE:
+
+>>> a = StringIOTree()
+>>> _= a.write('first\n')
+>>> b = a.insertion_point()
+>>> _= a.write('third\n')
+>>> _= b.write('second\n')
+>>> a.getvalue().split()
+['first', 'second', 'third']
+
+>>> c = b.insertion_point()
+>>> d = c.insertion_point()
+>>> _= d.write('alpha\n')
+>>> _= b.write('gamma\n')
+>>> _= c.write('beta\n')
+>>> b.getvalue().split()
+['second', 'alpha', 'beta', 'gamma']
+
+>>> i = StringIOTree()
+>>> d.insert(i)
+>>> _= i.write('inserted\n')
+>>> out = StringIO()
+>>> a.copyto(out)
+>>> out.getvalue().split()
+['first', 'second', 'alpha', 'inserted', 'beta', 'gamma', 'third']
+"""
+
+from __future__ import absolute_import #, unicode_literals
+
try:
+ # Prefer cStringIO since io.StringIO() does not support writing 'str' in Py2.
from cStringIO import StringIO
except ImportError:
- from io import StringIO # does not support writing 'str' in Py2
+ from io import StringIO
class StringIOTree(object):
@@ -69,35 +106,3 @@
def allmarkers(self):
children = self.prepended_children
return [m for c in children for m in c.allmarkers()] + self.markers
-
-
-__doc__ = r"""
-Implements a buffer with insertion points. When you know you need to
-"get back" to a place and write more later, simply call insertion_point()
-at that spot and get a new StringIOTree object that is "left behind".
-
-EXAMPLE:
-
->>> a = StringIOTree()
->>> _= a.write('first\n')
->>> b = a.insertion_point()
->>> _= a.write('third\n')
->>> _= b.write('second\n')
->>> a.getvalue().split()
-['first', 'second', 'third']
-
->>> c = b.insertion_point()
->>> d = c.insertion_point()
->>> _= d.write('alpha\n')
->>> _= b.write('gamma\n')
->>> _= c.write('beta\n')
->>> b.getvalue().split()
-['second', 'alpha', 'beta', 'gamma']
->>> i = StringIOTree()
->>> d.insert(i)
->>> _= i.write('inserted\n')
->>> out = StringIO()
->>> a.copyto(out)
->>> out.getvalue().split()
-['first', 'second', 'alpha', 'inserted', 'beta', 'gamma', 'third']
-"""
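[editor's illustration] The docstring moved to module level above documents the insertion-point mechanics. A compact reimplementation of the core idea, assuming only the documented behaviour (simplified: no `write` rebinding, no markers):

```python
from io import StringIO

class MiniTree(object):
    """Buffer with insertion points: children render before our own stream."""
    def __init__(self):
        self.prepended_children = []
        self.stream = StringIO()

    def write(self, text):
        self.stream.write(text)

    def getvalue(self):
        return ("".join(c.getvalue() for c in self.prepended_children)
                + self.stream.getvalue())

    def insertion_point(self):
        # Freeze what we wrote so far as one child, put the new insertion
        # point after it, and continue writing into a fresh stream.
        done = MiniTree()
        done.stream, done.prepended_children = self.stream, self.prepended_children
        point = MiniTree()
        self.prepended_children = [done, point]
        self.stream = StringIO()
        return point

t = MiniTree()
t.write("first\n")
p = t.insertion_point()
t.write("third\n")
p.write("second\n")
print(t.getvalue().split())  # ['first', 'second', 'third']
```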
diff -Nru cython-0.26.1/Cython/Tests/TestCodeWriter.py cython-0.29.14/Cython/Tests/TestCodeWriter.py
--- cython-0.26.1/Cython/Tests/TestCodeWriter.py 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Cython/Tests/TestCodeWriter.py 2018-09-22 14:18:56.000000000 +0000
@@ -4,7 +4,7 @@
# CythonTest uses the CodeWriter heavily, so do some checking by
# roundtripping Cython code through the test framework.
- # Note that this test is dependant upon the normal Cython parser
+ # Note that this test is dependent upon the normal Cython parser
# to generate the input trees to the CodeWriter. This save *a lot*
# of time; better to spend that time writing other tests than perfecting
# this one...
diff -Nru cython-0.26.1/Cython/Tests/TestCythonUtils.py cython-0.29.14/Cython/Tests/TestCythonUtils.py
--- cython-0.26.1/Cython/Tests/TestCythonUtils.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/Cython/Tests/TestCythonUtils.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,11 @@
+import unittest
+
+from ..Utils import build_hex_version
+
+class TestCythonUtils(unittest.TestCase):
+ def test_build_hex_version(self):
+ self.assertEqual('0x001D00A1', build_hex_version('0.29a1'))
+ self.assertEqual('0x001D00A1', build_hex_version('0.29a1'))
+ self.assertEqual('0x001D03C4', build_hex_version('0.29.3rc4'))
+ self.assertEqual('0x001D00F0', build_hex_version('0.29'))
+ self.assertEqual('0x040000F0', build_hex_version('4.0'))
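[editor's illustration] The new test pins down `build_hex_version`'s encoding: `0xMMmmuuRS` in the style of CPython's `PY_VERSION_HEX` (major, minor, micro, release level A/B/C/F, serial). A simplified re-derivation that reproduces the tested values (Cython's real implementation handles more version formats):

```python
import re

def build_hex_version_sketch(version):
    # "0.29.3rc4" -> 0x 00 1D 03 C4 : major, minor, micro, level+serial
    m = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?(?:(a|b|rc)(\d+))?$", version)
    major, minor = int(m.group(1)), int(m.group(2))
    micro = int(m.group(3) or 0)
    level = {"a": 0xA, "b": 0xB, "rc": 0xC, None: 0xF}[m.group(4)]
    serial = int(m.group(5) or 0)
    return "0x%02X%02X%02X%X%X" % (major, minor, micro, level, serial)

print(build_hex_version_sketch("0.29a1"))     # 0x001D00A1
print(build_hex_version_sketch("0.29.3rc4"))  # 0x001D03C4
print(build_hex_version_sketch("4.0"))        # 0x040000F0
```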
diff -Nru cython-0.26.1/Cython/Utility/AsyncGen.c cython-0.29.14/Cython/Utility/AsyncGen.c
--- cython-0.26.1/Cython/Utility/AsyncGen.c 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/Cython/Utility/AsyncGen.c 2019-11-01 14:13:39.000000000 +0000
@@ -0,0 +1,1112 @@
+// This is copied from genobject.c in CPython 3.6.
+// Try to keep it in sync by doing this from time to time:
+// sed -e 's|__pyx_||ig' Cython/Utility/AsyncGen.c | diff -udw - cpython/Objects/genobject.c | less
+
+//////////////////// AsyncGenerator.proto ////////////////////
+//@requires: Coroutine.c::Coroutine
+
+#define __Pyx_AsyncGen_USED
+typedef struct {
+ __pyx_CoroutineObject coro;
+ PyObject *ag_finalizer;
+ int ag_hooks_inited;
+ int ag_closed;
+} __pyx_PyAsyncGenObject;
+
+static PyTypeObject *__pyx__PyAsyncGenWrappedValueType = 0;
+static PyTypeObject *__pyx__PyAsyncGenASendType = 0;
+static PyTypeObject *__pyx__PyAsyncGenAThrowType = 0;
+static PyTypeObject *__pyx_AsyncGenType = 0;
+
+#define __Pyx_AsyncGen_CheckExact(obj) (Py_TYPE(obj) == __pyx_AsyncGenType)
+#define __pyx_PyAsyncGenASend_CheckExact(o) \
+ (Py_TYPE(o) == __pyx__PyAsyncGenASendType)
+#define __pyx_PyAsyncGenAThrow_CheckExact(o) \
+ (Py_TYPE(o) == __pyx__PyAsyncGenAThrowType)
+
+static PyObject *__Pyx_async_gen_anext(PyObject *o);
+static CYTHON_INLINE PyObject *__Pyx_async_gen_asend_iternext(PyObject *o);
+static PyObject *__Pyx_async_gen_asend_send(PyObject *o, PyObject *arg);
+static PyObject *__Pyx_async_gen_asend_close(PyObject *o, PyObject *args);
+static PyObject *__Pyx_async_gen_athrow_close(PyObject *o, PyObject *args);
+
+static PyObject *__Pyx__PyAsyncGenValueWrapperNew(PyObject *val);
+
+
+static __pyx_CoroutineObject *__Pyx_AsyncGen_New(
+ __pyx_coroutine_body_t body, PyObject *code, PyObject *closure,
+ PyObject *name, PyObject *qualname, PyObject *module_name) {
+ __pyx_PyAsyncGenObject *gen = PyObject_GC_New(__pyx_PyAsyncGenObject, __pyx_AsyncGenType);
+ if (unlikely(!gen))
+ return NULL;
+ gen->ag_finalizer = NULL;
+ gen->ag_closed = 0;
+ gen->ag_hooks_inited = 0;
+ return __Pyx__Coroutine_NewInit((__pyx_CoroutineObject*)gen, body, code, closure, name, qualname, module_name);
+}
+
+static int __pyx_AsyncGen_init(void);
+static void __Pyx_PyAsyncGen_Fini(void);
+
+//////////////////// AsyncGenerator.cleanup ////////////////////
+
+__Pyx_PyAsyncGen_Fini();
+
+//////////////////// AsyncGeneratorInitFinalizer ////////////////////
+
+// this is separated out because it needs more adaptation
+
+#if PY_VERSION_HEX < 0x030600B0
+static int __Pyx_async_gen_init_hooks(__pyx_PyAsyncGenObject *o) {
+#if 0
+ // TODO: implement finalizer support in older Python versions
+ PyThreadState *tstate;
+ PyObject *finalizer;
+ PyObject *firstiter;
+#endif
+
+ if (likely(o->ag_hooks_inited)) {
+ return 0;
+ }
+
+ o->ag_hooks_inited = 1;
+
+#if 0
+ tstate = __Pyx_PyThreadState_Current;
+
+ finalizer = tstate->async_gen_finalizer;
+ if (finalizer) {
+ Py_INCREF(finalizer);
+ o->ag_finalizer = finalizer;
+ }
+
+ firstiter = tstate->async_gen_firstiter;
+ if (firstiter) {
+ PyObject *res;
+
+ Py_INCREF(firstiter);
+ res = __Pyx_PyObject_CallOneArg(firstiter, (PyObject*)o);
+ Py_DECREF(firstiter);
+ if (res == NULL) {
+ return 1;
+ }
+ Py_DECREF(res);
+ }
+#endif
+
+ return 0;
+}
+#endif
+
+
+//////////////////// AsyncGenerator ////////////////////
+//@requires: AsyncGeneratorInitFinalizer
+//@requires: Coroutine.c::Coroutine
+//@requires: Coroutine.c::ReturnWithStopIteration
+//@requires: ObjectHandling.c::PyObjectCall2Args
+//@requires: ObjectHandling.c::PyObject_GenericGetAttrNoDict
+
+PyDoc_STRVAR(__Pyx_async_gen_send_doc,
+"send(arg) -> send 'arg' into generator,\n\
+return next yielded value or raise StopIteration.");
+
+PyDoc_STRVAR(__Pyx_async_gen_close_doc,
+"close() -> raise GeneratorExit inside generator.");
+
+PyDoc_STRVAR(__Pyx_async_gen_throw_doc,
+"throw(typ[,val[,tb]]) -> raise exception in generator,\n\
+return next yielded value or raise StopIteration.");
+
+PyDoc_STRVAR(__Pyx_async_gen_await_doc,
+"__await__() -> return a representation that can be passed into the 'await' expression.");
+
+// COPY STARTS HERE:
+
+static PyObject *__Pyx_async_gen_asend_new(__pyx_PyAsyncGenObject *, PyObject *);
+static PyObject *__Pyx_async_gen_athrow_new(__pyx_PyAsyncGenObject *, PyObject *);
+
+static const char *__Pyx_NON_INIT_CORO_MSG = "can't send non-None value to a just-started coroutine";
+static const char *__Pyx_ASYNC_GEN_IGNORED_EXIT_MSG = "async generator ignored GeneratorExit";
+
+typedef enum {
+ __PYX_AWAITABLE_STATE_INIT, /* new awaitable, has not yet been iterated */
+ __PYX_AWAITABLE_STATE_ITER, /* being iterated */
+ __PYX_AWAITABLE_STATE_CLOSED, /* closed */
+} __pyx_AwaitableState;
+
+typedef struct {
+ PyObject_HEAD
+ __pyx_PyAsyncGenObject *ags_gen;
+
+ /* Can be NULL, when in the __anext__() mode (equivalent of "asend(None)") */
+ PyObject *ags_sendval;
+
+ __pyx_AwaitableState ags_state;
+} __pyx_PyAsyncGenASend;
+
+
+typedef struct {
+ PyObject_HEAD
+ __pyx_PyAsyncGenObject *agt_gen;
+
+ /* Can be NULL, when in the "aclose()" mode (equivalent of "athrow(GeneratorExit)") */
+ PyObject *agt_args;
+
+ __pyx_AwaitableState agt_state;
+} __pyx_PyAsyncGenAThrow;
+
+
+typedef struct {
+ PyObject_HEAD
+ PyObject *agw_val;
+} __pyx__PyAsyncGenWrappedValue;
+
+
+#ifndef _PyAsyncGen_MAXFREELIST
+#define _PyAsyncGen_MAXFREELIST 80
+#endif
+
+// Freelists boost performance 6-10%; they also reduce memory
+// fragmentation, as _PyAsyncGenWrappedValue and PyAsyncGenASend
+// are short-living objects that are instantiated for every
+// __anext__ call.
+
+static __pyx__PyAsyncGenWrappedValue *__Pyx_ag_value_freelist[_PyAsyncGen_MAXFREELIST];
+static int __Pyx_ag_value_freelist_free = 0;
+
+static __pyx_PyAsyncGenASend *__Pyx_ag_asend_freelist[_PyAsyncGen_MAXFREELIST];
+static int __Pyx_ag_asend_freelist_free = 0;
+
+#define __pyx__PyAsyncGenWrappedValue_CheckExact(o) \
+ (Py_TYPE(o) == __pyx__PyAsyncGenWrappedValueType)
+
+
+static int
+__Pyx_async_gen_traverse(__pyx_PyAsyncGenObject *gen, visitproc visit, void *arg)
+{
+ Py_VISIT(gen->ag_finalizer);
+ return __Pyx_Coroutine_traverse((__pyx_CoroutineObject*)gen, visit, arg);
+}
+
+
+static PyObject *
+__Pyx_async_gen_repr(__pyx_CoroutineObject *o)
+{
+ // avoid NULL pointer dereference for qualname during garbage collection
+ return PyUnicode_FromFormat("<async_generator object %S at %p>",
+ o->gi_qualname ? o->gi_qualname : Py_None, o);
+}
+
+
+#if PY_VERSION_HEX >= 0x030600B0
+static int
+__Pyx_async_gen_init_hooks(__pyx_PyAsyncGenObject *o)
+{
+ PyThreadState *tstate;
+ PyObject *finalizer;
+ PyObject *firstiter;
+
+ if (o->ag_hooks_inited) {
+ return 0;
+ }
+
+ o->ag_hooks_inited = 1;
+
+ tstate = __Pyx_PyThreadState_Current;
+
+ finalizer = tstate->async_gen_finalizer;
+ if (finalizer) {
+ Py_INCREF(finalizer);
+ o->ag_finalizer = finalizer;
+ }
+
+ firstiter = tstate->async_gen_firstiter;
+ if (firstiter) {
+ PyObject *res;
+#if CYTHON_UNPACK_METHODS
+ PyObject *self;
+#endif
+
+ Py_INCREF(firstiter);
+ // at least asyncio stores methods here => optimise the call
+#if CYTHON_UNPACK_METHODS
+ if (likely(PyMethod_Check(firstiter)) && likely((self = PyMethod_GET_SELF(firstiter)) != NULL)) {
+ PyObject *function = PyMethod_GET_FUNCTION(firstiter);
+ res = __Pyx_PyObject_Call2Args(function, self, (PyObject*)o);
+ } else
+#endif
+ res = __Pyx_PyObject_CallOneArg(firstiter, (PyObject*)o);
+
+ Py_DECREF(firstiter);
+ if (unlikely(res == NULL)) {
+ return 1;
+ }
+ Py_DECREF(res);
+ }
+
+ return 0;
+}
+#endif
+
+
+static PyObject *
+__Pyx_async_gen_anext(PyObject *g)
+{
+ __pyx_PyAsyncGenObject *o = (__pyx_PyAsyncGenObject*) g;
+ if (__Pyx_async_gen_init_hooks(o)) {
+ return NULL;
+ }
+ return __Pyx_async_gen_asend_new(o, NULL);
+}
+
+static PyObject *
+__Pyx_async_gen_anext_method(PyObject *g, CYTHON_UNUSED PyObject *arg) {
+ return __Pyx_async_gen_anext(g);
+}
+
+
+static PyObject *
+__Pyx_async_gen_asend(__pyx_PyAsyncGenObject *o, PyObject *arg)
+{
+ if (__Pyx_async_gen_init_hooks(o)) {
+ return NULL;
+ }
+ return __Pyx_async_gen_asend_new(o, arg);
+}
+
+
+static PyObject *
+__Pyx_async_gen_aclose(__pyx_PyAsyncGenObject *o, CYTHON_UNUSED PyObject *arg)
+{
+ if (__Pyx_async_gen_init_hooks(o)) {
+ return NULL;
+ }
+ return __Pyx_async_gen_athrow_new(o, NULL);
+}
+
+
+static PyObject *
+__Pyx_async_gen_athrow(__pyx_PyAsyncGenObject *o, PyObject *args)
+{
+ if (__Pyx_async_gen_init_hooks(o)) {
+ return NULL;
+ }
+ return __Pyx_async_gen_athrow_new(o, args);
+}
+
+
+static PyObject *
+__Pyx_async_gen_self_method(PyObject *g, CYTHON_UNUSED PyObject *arg) {
+ return __Pyx_NewRef(g);
+}
+
+
+static PyGetSetDef __Pyx_async_gen_getsetlist[] = {
+ {(char*) "__name__", (getter)__Pyx_Coroutine_get_name, (setter)__Pyx_Coroutine_set_name,
+ (char*) PyDoc_STR("name of the async generator"), 0},
+ {(char*) "__qualname__", (getter)__Pyx_Coroutine_get_qualname, (setter)__Pyx_Coroutine_set_qualname,
+ (char*) PyDoc_STR("qualified name of the async generator"), 0},
+ //REMOVED: {(char*) "ag_await", (getter)coro_get_cr_await, NULL,
+ //REMOVED: (char*) PyDoc_STR("object being awaited on, or None")},
+ {0, 0, 0, 0, 0} /* Sentinel */
+};
+
+static PyMemberDef __Pyx_async_gen_memberlist[] = {
+ //REMOVED: {(char*) "ag_frame", T_OBJECT, offsetof(__pyx_PyAsyncGenObject, ag_frame), READONLY},
+ {(char*) "ag_running", T_BOOL, offsetof(__pyx_CoroutineObject, is_running), READONLY, NULL},
+ //REMOVED: {(char*) "ag_code", T_OBJECT, offsetof(__pyx_PyAsyncGenObject, ag_code), READONLY},
+ //ADDED: "ag_await"
+ {(char*) "ag_await", T_OBJECT, offsetof(__pyx_CoroutineObject, yieldfrom), READONLY,
+ (char*) PyDoc_STR("object being awaited on, or None")},
+ {0, 0, 0, 0, 0} /* Sentinel */
+};
+
+PyDoc_STRVAR(__Pyx_async_aclose_doc,
+"aclose() -> raise GeneratorExit inside generator.");
+
+PyDoc_STRVAR(__Pyx_async_asend_doc,
+"asend(v) -> send 'v' in generator.");
+
+PyDoc_STRVAR(__Pyx_async_athrow_doc,
+"athrow(typ[,val[,tb]]) -> raise exception in generator.");
+
+PyDoc_STRVAR(__Pyx_async_aiter_doc,
+"__aiter__(v) -> return an asynchronous iterator.");
+
+PyDoc_STRVAR(__Pyx_async_anext_doc,
+"__anext__(v) -> continue asynchronous iteration and return the next element.");
+
+static PyMethodDef __Pyx_async_gen_methods[] = {
+ {"asend", (PyCFunction)__Pyx_async_gen_asend, METH_O, __Pyx_async_asend_doc},
+ {"athrow",(PyCFunction)__Pyx_async_gen_athrow, METH_VARARGS, __Pyx_async_athrow_doc},
+ {"aclose", (PyCFunction)__Pyx_async_gen_aclose, METH_NOARGS, __Pyx_async_aclose_doc},
+ {"__aiter__", (PyCFunction)__Pyx_async_gen_self_method, METH_NOARGS, __Pyx_async_aiter_doc},
+ {"__anext__", (PyCFunction)__Pyx_async_gen_anext_method, METH_NOARGS, __Pyx_async_anext_doc},
+ {0, 0, 0, 0} /* Sentinel */
+};
+
+
+#if CYTHON_USE_ASYNC_SLOTS
+static __Pyx_PyAsyncMethodsStruct __Pyx_async_gen_as_async = {
+ 0, /* am_await */
+ PyObject_SelfIter, /* am_aiter */
+ (unaryfunc)__Pyx_async_gen_anext /* am_anext */
+};
+#endif
+
+static PyTypeObject __pyx_AsyncGenType_type = {
+ PyVarObject_HEAD_INIT(0, 0)
+ "async_generator", /* tp_name */
+ sizeof(__pyx_PyAsyncGenObject), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ (destructor)__Pyx_Coroutine_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+#if CYTHON_USE_ASYNC_SLOTS
+ &__Pyx_async_gen_as_async, /* tp_as_async */
+#else
+ 0, /*tp_reserved*/
+#endif
+ (reprfunc)__Pyx_async_gen_repr, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC |
+ Py_TPFLAGS_HAVE_FINALIZE, /* tp_flags */
+ 0, /* tp_doc */
+ (traverseproc)__Pyx_async_gen_traverse, /* tp_traverse */
+ 0, /* tp_clear */
+#if CYTHON_USE_ASYNC_SLOTS && CYTHON_COMPILING_IN_CPYTHON && PY_MAJOR_VERSION >= 3 && PY_VERSION_HEX < 0x030500B1
+ // in order to (mis-)use tp_reserved above, we must also implement tp_richcompare
+ __Pyx_Coroutine_compare, /*tp_richcompare*/
+#else
+ 0, /*tp_richcompare*/
+#endif
+ offsetof(__pyx_CoroutineObject, gi_weakreflist), /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ __Pyx_async_gen_methods, /* tp_methods */
+ __Pyx_async_gen_memberlist, /* tp_members */
+ __Pyx_async_gen_getsetlist, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ 0, /* tp_init */
+ 0, /* tp_alloc */
+ 0, /* tp_new */
+ 0, /* tp_free */
+ 0, /* tp_is_gc */
+ 0, /* tp_bases */
+ 0, /* tp_mro */
+ 0, /* tp_cache */
+ 0, /* tp_subclasses */
+ 0, /* tp_weaklist */
+#if CYTHON_USE_TP_FINALIZE
+ 0, /*tp_del*/
+#else
+ __Pyx_Coroutine_del, /*tp_del*/
+#endif
+ 0, /* tp_version_tag */
+#if CYTHON_USE_TP_FINALIZE
+ __Pyx_Coroutine_del, /* tp_finalize */
+#elif PY_VERSION_HEX >= 0x030400a1
+ 0, /* tp_finalize */
+#endif
+#if PY_VERSION_HEX >= 0x030800b1
+ 0, /*tp_vectorcall*/
+#endif
+#if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000
+ 0, /*tp_print*/
+#endif
+};
+
+
+static int
+__Pyx_PyAsyncGen_ClearFreeLists(void)
+{
+ int ret = __Pyx_ag_value_freelist_free + __Pyx_ag_asend_freelist_free;
+
+ while (__Pyx_ag_value_freelist_free) {
+ __pyx__PyAsyncGenWrappedValue *o;
+ o = __Pyx_ag_value_freelist[--__Pyx_ag_value_freelist_free];
+ assert(__pyx__PyAsyncGenWrappedValue_CheckExact(o));
+ PyObject_GC_Del(o);
+ }
+
+ while (__Pyx_ag_asend_freelist_free) {
+ __pyx_PyAsyncGenASend *o;
+ o = __Pyx_ag_asend_freelist[--__Pyx_ag_asend_freelist_free];
+ assert(Py_TYPE(o) == __pyx__PyAsyncGenASendType);
+ PyObject_GC_Del(o);
+ }
+
+ return ret;
+}
+
+static void
+__Pyx_PyAsyncGen_Fini(void)
+{
+ __Pyx_PyAsyncGen_ClearFreeLists();
+}
+
+
+static PyObject *
+__Pyx_async_gen_unwrap_value(__pyx_PyAsyncGenObject *gen, PyObject *result)
+{
+ if (result == NULL) {
+ PyObject *exc_type = PyErr_Occurred();
+ if (!exc_type) {
+ PyErr_SetNone(__Pyx_PyExc_StopAsyncIteration);
+ gen->ag_closed = 1;
+ } else if (__Pyx_PyErr_GivenExceptionMatches2(exc_type, __Pyx_PyExc_StopAsyncIteration, PyExc_GeneratorExit)) {
+ gen->ag_closed = 1;
+ }
+
+ return NULL;
+ }
+
+ if (__pyx__PyAsyncGenWrappedValue_CheckExact(result)) {
+ /* async yield */
+ __Pyx_ReturnWithStopIteration(((__pyx__PyAsyncGenWrappedValue*)result)->agw_val);
+ Py_DECREF(result);
+ return NULL;
+ }
+
+ return result;
+}
+
+
+/* ---------- Async Generator ASend Awaitable ------------ */
+
+
+static void
+__Pyx_async_gen_asend_dealloc(__pyx_PyAsyncGenASend *o)
+{
+ PyObject_GC_UnTrack((PyObject *)o);
+ Py_CLEAR(o->ags_gen);
+ Py_CLEAR(o->ags_sendval);
+ if (__Pyx_ag_asend_freelist_free < _PyAsyncGen_MAXFREELIST) {
+ assert(__pyx_PyAsyncGenASend_CheckExact(o));
+ __Pyx_ag_asend_freelist[__Pyx_ag_asend_freelist_free++] = o;
+ } else {
+ PyObject_GC_Del(o);
+ }
+}
+
+static int
+__Pyx_async_gen_asend_traverse(__pyx_PyAsyncGenASend *o, visitproc visit, void *arg)
+{
+ Py_VISIT(o->ags_gen);
+ Py_VISIT(o->ags_sendval);
+ return 0;
+}
+
+
+static PyObject *
+__Pyx_async_gen_asend_send(PyObject *g, PyObject *arg)
+{
+ __pyx_PyAsyncGenASend *o = (__pyx_PyAsyncGenASend*) g;
+ PyObject *result;
+
+ if (unlikely(o->ags_state == __PYX_AWAITABLE_STATE_CLOSED)) {
+ PyErr_SetNone(PyExc_StopIteration);
+ return NULL;
+ }
+
+ if (o->ags_state == __PYX_AWAITABLE_STATE_INIT) {
+ if (arg == NULL || arg == Py_None) {
+ arg = o->ags_sendval ? o->ags_sendval : Py_None;
+ }
+ o->ags_state = __PYX_AWAITABLE_STATE_ITER;
+ }
+
+ result = __Pyx_Coroutine_Send((PyObject*)o->ags_gen, arg);
+ result = __Pyx_async_gen_unwrap_value(o->ags_gen, result);
+
+ if (result == NULL) {
+ o->ags_state = __PYX_AWAITABLE_STATE_CLOSED;
+ }
+
+ return result;
+}
+
+
+static CYTHON_INLINE PyObject *
+__Pyx_async_gen_asend_iternext(PyObject *o)
+{
+ return __Pyx_async_gen_asend_send(o, Py_None);
+}
+
+
+static PyObject *
+__Pyx_async_gen_asend_throw(__pyx_PyAsyncGenASend *o, PyObject *args)
+{
+ PyObject *result;
+
+ if (unlikely(o->ags_state == __PYX_AWAITABLE_STATE_CLOSED)) {
+ PyErr_SetNone(PyExc_StopIteration);
+ return NULL;
+ }
+
+ result = __Pyx_Coroutine_Throw((PyObject*)o->ags_gen, args);
+ result = __Pyx_async_gen_unwrap_value(o->ags_gen, result);
+
+ if (result == NULL) {
+ o->ags_state = __PYX_AWAITABLE_STATE_CLOSED;
+ }
+
+ return result;
+}
+
+
+static PyObject *
+__Pyx_async_gen_asend_close(PyObject *g, CYTHON_UNUSED PyObject *args)
+{
+ __pyx_PyAsyncGenASend *o = (__pyx_PyAsyncGenASend*) g;
+ o->ags_state = __PYX_AWAITABLE_STATE_CLOSED;
+ Py_RETURN_NONE;
+}
+
+
+static PyMethodDef __Pyx_async_gen_asend_methods[] = {
+ {"send", (PyCFunction)__Pyx_async_gen_asend_send, METH_O, __Pyx_async_gen_send_doc},
+ {"throw", (PyCFunction)__Pyx_async_gen_asend_throw, METH_VARARGS, __Pyx_async_gen_throw_doc},
+ {"close", (PyCFunction)__Pyx_async_gen_asend_close, METH_NOARGS, __Pyx_async_gen_close_doc},
+ {"__await__", (PyCFunction)__Pyx_async_gen_self_method, METH_NOARGS, __Pyx_async_gen_await_doc},
+ {0, 0, 0, 0} /* Sentinel */
+};
+
+
+#if CYTHON_USE_ASYNC_SLOTS
+static __Pyx_PyAsyncMethodsStruct __Pyx_async_gen_asend_as_async = {
+ PyObject_SelfIter, /* am_await */
+ 0, /* am_aiter */
+ 0 /* am_anext */
+};
+#endif
+
+
+static PyTypeObject __pyx__PyAsyncGenASendType_type = {
+ PyVarObject_HEAD_INIT(0, 0)
+ "async_generator_asend", /* tp_name */
+ sizeof(__pyx_PyAsyncGenASend), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ /* methods */
+ (destructor)__Pyx_async_gen_asend_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+#if CYTHON_USE_ASYNC_SLOTS
+ &__Pyx_async_gen_asend_as_async, /* tp_as_async */
+#else
+ 0, /*tp_reserved*/
+#endif
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, /* tp_flags */
+ 0, /* tp_doc */
+ (traverseproc)__Pyx_async_gen_asend_traverse, /* tp_traverse */
+ 0, /* tp_clear */
+#if CYTHON_USE_ASYNC_SLOTS && CYTHON_COMPILING_IN_CPYTHON && PY_MAJOR_VERSION >= 3 && PY_VERSION_HEX < 0x030500B1
+ // in order to (mis-)use tp_reserved above, we must also implement tp_richcompare
+ __Pyx_Coroutine_compare, /*tp_richcompare*/
+#else
+ 0, /*tp_richcompare*/
+#endif
+ 0, /* tp_weaklistoffset */
+ PyObject_SelfIter, /* tp_iter */
+ (iternextfunc)__Pyx_async_gen_asend_iternext, /* tp_iternext */
+ __Pyx_async_gen_asend_methods, /* tp_methods */
+ 0, /* tp_members */
+ 0, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ 0, /* tp_init */
+ 0, /* tp_alloc */
+ 0, /* tp_new */
+ 0, /* tp_free */
+ 0, /* tp_is_gc */
+ 0, /* tp_bases */
+ 0, /* tp_mro */
+ 0, /* tp_cache */
+ 0, /* tp_subclasses */
+ 0, /* tp_weaklist */
+ 0, /* tp_del */
+ 0, /* tp_version_tag */
+#if PY_VERSION_HEX >= 0x030400a1
+ 0, /* tp_finalize */
+#endif
+#if PY_VERSION_HEX >= 0x030800b1
+ 0, /*tp_vectorcall*/
+#endif
+#if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000
+ 0, /*tp_print*/
+#endif
+};
+
+
+static PyObject *
+__Pyx_async_gen_asend_new(__pyx_PyAsyncGenObject *gen, PyObject *sendval)
+{
+ __pyx_PyAsyncGenASend *o;
+ if (__Pyx_ag_asend_freelist_free) {
+ __Pyx_ag_asend_freelist_free--;
+ o = __Pyx_ag_asend_freelist[__Pyx_ag_asend_freelist_free];
+ _Py_NewReference((PyObject *)o);
+ } else {
+ o = PyObject_GC_New(__pyx_PyAsyncGenASend, __pyx__PyAsyncGenASendType);
+ if (o == NULL) {
+ return NULL;
+ }
+ }
+
+ Py_INCREF(gen);
+ o->ags_gen = gen;
+
+ Py_XINCREF(sendval);
+ o->ags_sendval = sendval;
+
+ o->ags_state = __PYX_AWAITABLE_STATE_INIT;
+
+ PyObject_GC_Track((PyObject*)o);
+ return (PyObject*)o;
+}
+
+
+/* ---------- Async Generator Value Wrapper ------------ */
+
+
+static void
+__Pyx_async_gen_wrapped_val_dealloc(__pyx__PyAsyncGenWrappedValue *o)
+{
+ PyObject_GC_UnTrack((PyObject *)o);
+ Py_CLEAR(o->agw_val);
+ if (__Pyx_ag_value_freelist_free < _PyAsyncGen_MAXFREELIST) {
+ assert(__pyx__PyAsyncGenWrappedValue_CheckExact(o));
+ __Pyx_ag_value_freelist[__Pyx_ag_value_freelist_free++] = o;
+ } else {
+ PyObject_GC_Del(o);
+ }
+}
+
+
+static int
+__Pyx_async_gen_wrapped_val_traverse(__pyx__PyAsyncGenWrappedValue *o,
+ visitproc visit, void *arg)
+{
+ Py_VISIT(o->agw_val);
+ return 0;
+}
+
+
+static PyTypeObject __pyx__PyAsyncGenWrappedValueType_type = {
+ PyVarObject_HEAD_INIT(0, 0)
+ "async_generator_wrapped_value", /* tp_name */
+ sizeof(__pyx__PyAsyncGenWrappedValue), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ /* methods */
+ (destructor)__Pyx_async_gen_wrapped_val_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_as_async */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, /* tp_flags */
+ 0, /* tp_doc */
+ (traverseproc)__Pyx_async_gen_wrapped_val_traverse, /* tp_traverse */
+ 0, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ 0, /* tp_methods */
+ 0, /* tp_members */
+ 0, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ 0, /* tp_init */
+ 0, /* tp_alloc */
+ 0, /* tp_new */
+ 0, /* tp_free */
+ 0, /* tp_is_gc */
+ 0, /* tp_bases */
+ 0, /* tp_mro */
+ 0, /* tp_cache */
+ 0, /* tp_subclasses */
+ 0, /* tp_weaklist */
+ 0, /* tp_del */
+ 0, /* tp_version_tag */
+#if PY_VERSION_HEX >= 0x030400a1
+ 0, /* tp_finalize */
+#endif
+#if PY_VERSION_HEX >= 0x030800b1
+ 0, /*tp_vectorcall*/
+#endif
+#if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000
+ 0, /*tp_print*/
+#endif
+};
+
+
+static PyObject *
+__Pyx__PyAsyncGenValueWrapperNew(PyObject *val)
+{
+ // NOTE: steals a reference to val !
+ __pyx__PyAsyncGenWrappedValue *o;
+ assert(val);
+
+ if (__Pyx_ag_value_freelist_free) {
+ __Pyx_ag_value_freelist_free--;
+ o = __Pyx_ag_value_freelist[__Pyx_ag_value_freelist_free];
+ assert(__pyx__PyAsyncGenWrappedValue_CheckExact(o));
+ _Py_NewReference((PyObject*)o);
+ } else {
+ o = PyObject_GC_New(__pyx__PyAsyncGenWrappedValue, __pyx__PyAsyncGenWrappedValueType);
+ if (unlikely(!o)) {
+ Py_DECREF(val);
+ return NULL;
+ }
+ }
+ o->agw_val = val;
+ // no Py_INCREF(val) - steals reference!
+ PyObject_GC_Track((PyObject*)o);
+ return (PyObject*)o;
+}
+
+
+/* ---------- Async Generator AThrow awaitable ------------ */
+
+
+static void
+__Pyx_async_gen_athrow_dealloc(__pyx_PyAsyncGenAThrow *o)
+{
+ PyObject_GC_UnTrack((PyObject *)o);
+ Py_CLEAR(o->agt_gen);
+ Py_CLEAR(o->agt_args);
+ PyObject_GC_Del(o);
+}
+
+
+static int
+__Pyx_async_gen_athrow_traverse(__pyx_PyAsyncGenAThrow *o, visitproc visit, void *arg)
+{
+ Py_VISIT(o->agt_gen);
+ Py_VISIT(o->agt_args);
+ return 0;
+}
+
+
+static PyObject *
+__Pyx_async_gen_athrow_send(__pyx_PyAsyncGenAThrow *o, PyObject *arg)
+{
+ __pyx_CoroutineObject *gen = (__pyx_CoroutineObject*)o->agt_gen;
+ PyObject *retval;
+
+ if (o->agt_state == __PYX_AWAITABLE_STATE_CLOSED) {
+ PyErr_SetNone(PyExc_StopIteration);
+ return NULL;
+ }
+
+ if (o->agt_state == __PYX_AWAITABLE_STATE_INIT) {
+ if (o->agt_gen->ag_closed) {
+ PyErr_SetNone(PyExc_StopIteration);
+ return NULL;
+ }
+
+ if (arg != Py_None) {
+ PyErr_SetString(PyExc_RuntimeError, __Pyx_NON_INIT_CORO_MSG);
+ return NULL;
+ }
+
+ o->agt_state = __PYX_AWAITABLE_STATE_ITER;
+
+ if (o->agt_args == NULL) {
+ /* aclose() mode */
+ o->agt_gen->ag_closed = 1;
+
+ retval = __Pyx__Coroutine_Throw((PyObject*)gen,
+ /* Do not close generator when
+ PyExc_GeneratorExit is passed */
+ PyExc_GeneratorExit, NULL, NULL, NULL, 0);
+
+ if (retval && __pyx__PyAsyncGenWrappedValue_CheckExact(retval)) {
+ Py_DECREF(retval);
+ goto yield_close;
+ }
+ } else {
+ PyObject *typ;
+ PyObject *tb = NULL;
+ PyObject *val = NULL;
+
+ if (!PyArg_UnpackTuple(o->agt_args, "athrow", 1, 3,
+ &typ, &val, &tb)) {
+ return NULL;
+ }
+
+ retval = __Pyx__Coroutine_Throw((PyObject*)gen,
+ /* Do not close generator when PyExc_GeneratorExit is passed */
+ typ, val, tb, o->agt_args, 0);
+ retval = __Pyx_async_gen_unwrap_value(o->agt_gen, retval);
+ }
+ if (retval == NULL) {
+ goto check_error;
+ }
+ return retval;
+ }
+
+ assert (o->agt_state == __PYX_AWAITABLE_STATE_ITER);
+
+ retval = __Pyx_Coroutine_Send((PyObject *)gen, arg);
+ if (o->agt_args) {
+ return __Pyx_async_gen_unwrap_value(o->agt_gen, retval);
+ } else {
+ /* aclose() mode */
+ if (retval) {
+ if (__pyx__PyAsyncGenWrappedValue_CheckExact(retval)) {
+ Py_DECREF(retval);
+ goto yield_close;
+ }
+ else {
+ return retval;
+ }
+ }
+ else {
+ goto check_error;
+ }
+ }
+
+yield_close:
+ PyErr_SetString(
+ PyExc_RuntimeError, __Pyx_ASYNC_GEN_IGNORED_EXIT_MSG);
+ return NULL;
+
+check_error:
+ if (PyErr_ExceptionMatches(__Pyx_PyExc_StopAsyncIteration)) {
+ o->agt_state = __PYX_AWAITABLE_STATE_CLOSED;
+ if (o->agt_args == NULL) {
+ // when aclose() is called we don't want to propagate
+ // StopAsyncIteration; just raise StopIteration, signalling
+ // that 'aclose()' is done.
+ PyErr_Clear();
+ PyErr_SetNone(PyExc_StopIteration);
+ }
+ }
+ else if (PyErr_ExceptionMatches(PyExc_GeneratorExit)) {
+ o->agt_state = __PYX_AWAITABLE_STATE_CLOSED;
+ PyErr_Clear(); /* ignore these errors */
+ PyErr_SetNone(PyExc_StopIteration);
+ }
+ return NULL;
+}
+
+
+static PyObject *
+__Pyx_async_gen_athrow_throw(__pyx_PyAsyncGenAThrow *o, PyObject *args)
+{
+ PyObject *retval;
+
+ if (o->agt_state == __PYX_AWAITABLE_STATE_INIT) {
+ PyErr_SetString(PyExc_RuntimeError, __Pyx_NON_INIT_CORO_MSG);
+ return NULL;
+ }
+
+ if (o->agt_state == __PYX_AWAITABLE_STATE_CLOSED) {
+ PyErr_SetNone(PyExc_StopIteration);
+ return NULL;
+ }
+
+ retval = __Pyx_Coroutine_Throw((PyObject*)o->agt_gen, args);
+ if (o->agt_args) {
+ return __Pyx_async_gen_unwrap_value(o->agt_gen, retval);
+ } else {
+ /* aclose() mode */
+ if (retval && __pyx__PyAsyncGenWrappedValue_CheckExact(retval)) {
+ Py_DECREF(retval);
+ PyErr_SetString(PyExc_RuntimeError, __Pyx_ASYNC_GEN_IGNORED_EXIT_MSG);
+ return NULL;
+ }
+ return retval;
+ }
+}
+
+
+static PyObject *
+__Pyx_async_gen_athrow_iternext(__pyx_PyAsyncGenAThrow *o)
+{
+ return __Pyx_async_gen_athrow_send(o, Py_None);
+}
+
+
+static PyObject *
+__Pyx_async_gen_athrow_close(PyObject *g, CYTHON_UNUSED PyObject *args)
+{
+ __pyx_PyAsyncGenAThrow *o = (__pyx_PyAsyncGenAThrow*) g;
+ o->agt_state = __PYX_AWAITABLE_STATE_CLOSED;
+ Py_RETURN_NONE;
+}
+
+
+static PyMethodDef __Pyx_async_gen_athrow_methods[] = {
+ {"send", (PyCFunction)__Pyx_async_gen_athrow_send, METH_O, __Pyx_async_gen_send_doc},
+ {"throw", (PyCFunction)__Pyx_async_gen_athrow_throw, METH_VARARGS, __Pyx_async_gen_throw_doc},
+ {"close", (PyCFunction)__Pyx_async_gen_athrow_close, METH_NOARGS, __Pyx_async_gen_close_doc},
+ {"__await__", (PyCFunction)__Pyx_async_gen_self_method, METH_NOARGS, __Pyx_async_gen_await_doc},
+ {0, 0, 0, 0} /* Sentinel */
+};
+
+
+#if CYTHON_USE_ASYNC_SLOTS
+static __Pyx_PyAsyncMethodsStruct __Pyx_async_gen_athrow_as_async = {
+ PyObject_SelfIter, /* am_await */
+ 0, /* am_aiter */
+ 0 /* am_anext */
+};
+#endif
+
+
+static PyTypeObject __pyx__PyAsyncGenAThrowType_type = {
+ PyVarObject_HEAD_INIT(0, 0)
+ "async_generator_athrow", /* tp_name */
+ sizeof(__pyx_PyAsyncGenAThrow), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ (destructor)__Pyx_async_gen_athrow_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+#if CYTHON_USE_ASYNC_SLOTS
+ &__Pyx_async_gen_athrow_as_async, /* tp_as_async */
+#else
+ 0, /*tp_reserved*/
+#endif
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, /* tp_flags */
+ 0, /* tp_doc */
+ (traverseproc)__Pyx_async_gen_athrow_traverse, /* tp_traverse */
+ 0, /* tp_clear */
+#if CYTHON_USE_ASYNC_SLOTS && CYTHON_COMPILING_IN_CPYTHON && PY_MAJOR_VERSION >= 3 && PY_VERSION_HEX < 0x030500B1
+ // in order to (mis-)use tp_reserved above, we must also implement tp_richcompare
+ __Pyx_Coroutine_compare, /*tp_richcompare*/
+#else
+ 0, /*tp_richcompare*/
+#endif
+ 0, /* tp_weaklistoffset */
+ PyObject_SelfIter, /* tp_iter */
+ (iternextfunc)__Pyx_async_gen_athrow_iternext, /* tp_iternext */
+ __Pyx_async_gen_athrow_methods, /* tp_methods */
+ 0, /* tp_members */
+ 0, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ 0, /* tp_init */
+ 0, /* tp_alloc */
+ 0, /* tp_new */
+ 0, /* tp_free */
+ 0, /* tp_is_gc */
+ 0, /* tp_bases */
+ 0, /* tp_mro */
+ 0, /* tp_cache */
+ 0, /* tp_subclasses */
+ 0, /* tp_weaklist */
+ 0, /* tp_del */
+ 0, /* tp_version_tag */
+#if PY_VERSION_HEX >= 0x030400a1
+ 0, /* tp_finalize */
+#endif
+#if PY_VERSION_HEX >= 0x030800b1
+ 0, /*tp_vectorcall*/
+#endif
+#if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000
+ 0, /*tp_print*/
+#endif
+};
+
+
+static PyObject *
+__Pyx_async_gen_athrow_new(__pyx_PyAsyncGenObject *gen, PyObject *args)
+{
+ __pyx_PyAsyncGenAThrow *o;
+ o = PyObject_GC_New(__pyx_PyAsyncGenAThrow, __pyx__PyAsyncGenAThrowType);
+ if (o == NULL) {
+ return NULL;
+ }
+ o->agt_gen = gen;
+ o->agt_args = args;
+ o->agt_state = __PYX_AWAITABLE_STATE_INIT;
+ Py_INCREF(gen);
+ Py_XINCREF(args);
+ PyObject_GC_Track((PyObject*)o);
+ return (PyObject*)o;
+}
+
+
+/* ---------- global type sharing ------------ */
+
+static int __pyx_AsyncGen_init(void) {
+ // on Windows, C-API functions can't be used in slots statically
+ __pyx_AsyncGenType_type.tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict;
+ __pyx__PyAsyncGenWrappedValueType_type.tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict;
+ __pyx__PyAsyncGenAThrowType_type.tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict;
+ __pyx__PyAsyncGenASendType_type.tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict;
+
+ __pyx_AsyncGenType = __Pyx_FetchCommonType(&__pyx_AsyncGenType_type);
+ if (unlikely(!__pyx_AsyncGenType))
+ return -1;
+
+ __pyx__PyAsyncGenAThrowType = __Pyx_FetchCommonType(&__pyx__PyAsyncGenAThrowType_type);
+ if (unlikely(!__pyx__PyAsyncGenAThrowType))
+ return -1;
+
+ __pyx__PyAsyncGenWrappedValueType = __Pyx_FetchCommonType(&__pyx__PyAsyncGenWrappedValueType_type);
+ if (unlikely(!__pyx__PyAsyncGenWrappedValueType))
+ return -1;
+
+ __pyx__PyAsyncGenASendType = __Pyx_FetchCommonType(&__pyx__PyAsyncGenASendType_type);
+ if (unlikely(!__pyx__PyAsyncGenASendType))
+ return -1;
+
+ return 0;
+}
diff -Nru cython-0.26.1/Cython/Utility/Buffer.c cython-0.29.14/Cython/Utility/Buffer.c
--- cython-0.26.1/Cython/Utility/Buffer.c 2017-08-25 17:18:32.000000000 +0000
+++ cython-0.29.14/Cython/Utility/Buffer.c 2019-11-01 14:13:39.000000000 +0000
@@ -52,6 +52,7 @@
}
/////////////// BufferFormatStructs.proto ///////////////
+//@proto_block: utility_code_proto_before_types
#define IS_UNSIGNED(type) (((type) -1) > 0)
@@ -95,7 +96,9 @@
char is_valid_array;
} __Pyx_BufFmt_Context;
+
/////////////// GetAndReleaseBuffer.proto ///////////////
+
#if PY_MAJOR_VERSION < 3
static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags);
static void __Pyx_ReleaseBuffer(Py_buffer *view);
@@ -105,13 +108,14 @@
#endif
/////////////// GetAndReleaseBuffer ///////////////
+
#if PY_MAJOR_VERSION < 3
static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) {
if (PyObject_CheckBuffer(obj)) return PyObject_GetBuffer(obj, view, flags);
{{for type_ptr, getbuffer, releasebuffer in types}}
{{if getbuffer}}
- if (PyObject_TypeCheck(obj, {{type_ptr}})) return {{getbuffer}}(obj, view, flags);
+ if (__Pyx_TypeCheck(obj, {{type_ptr}})) return {{getbuffer}}(obj, view, flags);
{{endif}}
{{endfor}}
@@ -131,7 +135,7 @@
if ((0)) {}
{{for type_ptr, getbuffer, releasebuffer in types}}
{{if releasebuffer}}
- else if (PyObject_TypeCheck(obj, {{type_ptr}})) {{releasebuffer}}(obj, view);
+ else if (__Pyx_TypeCheck(obj, {{type_ptr}})) {{releasebuffer}}(obj, view);
{{endif}}
{{endfor}}
@@ -141,32 +145,98 @@
#endif /* PY_MAJOR_VERSION < 3 */
-/////////////// BufferFormatCheck.proto ///////////////
-{{#
- Buffer format string checking
+/////////////// BufferGetAndValidate.proto ///////////////
- Buffer type checking. Utility code for checking that acquired
- buffers match our assumptions. We only need to check ndim and
- the format string; the access mode/flags is checked by the
- exporter. See:
+#define __Pyx_GetBufferAndValidate(buf, obj, dtype, flags, nd, cast, stack) \
+ ((obj == Py_None || obj == NULL) ? \
+ (__Pyx_ZeroBuffer(buf), 0) : \
+ __Pyx__GetBufferAndValidate(buf, obj, dtype, flags, nd, cast, stack))
- http://docs.python.org/3/library/struct.html
- http://legacy.python.org/dev/peps/pep-3118/#additions-to-the-struct-string-syntax
+static int __Pyx__GetBufferAndValidate(Py_buffer* buf, PyObject* obj,
+ __Pyx_TypeInfo* dtype, int flags, int nd, int cast, __Pyx_BufFmt_StackElem* stack);
+static void __Pyx_ZeroBuffer(Py_buffer* buf);
+static CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info);/*proto*/
- The alignment code is copied from _struct.c in Python.
-}}
+static Py_ssize_t __Pyx_minusones[] = { {{ ", ".join(["-1"] * max_dims) }} };
+static Py_ssize_t __Pyx_zeros[] = { {{ ", ".join(["0"] * max_dims) }} };
+
+
+/////////////// BufferGetAndValidate ///////////////
+//@requires: BufferFormatCheck
+
+static CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info) {
+ if (unlikely(info->buf == NULL)) return;
+ if (info->suboffsets == __Pyx_minusones) info->suboffsets = NULL;
+ __Pyx_ReleaseBuffer(info);
+}
+
+static void __Pyx_ZeroBuffer(Py_buffer* buf) {
+ buf->buf = NULL;
+ buf->obj = NULL;
+ buf->strides = __Pyx_zeros;
+ buf->shape = __Pyx_zeros;
+ buf->suboffsets = __Pyx_minusones;
+}
+
+static int __Pyx__GetBufferAndValidate(
+ Py_buffer* buf, PyObject* obj, __Pyx_TypeInfo* dtype, int flags,
+ int nd, int cast, __Pyx_BufFmt_StackElem* stack)
+{
+ buf->buf = NULL;
+ if (unlikely(__Pyx_GetBuffer(obj, buf, flags) == -1)) {
+ __Pyx_ZeroBuffer(buf);
+ return -1;
+ }
+ // From this point on, we have acquired the buffer and must release it on errors.
+ if (unlikely(buf->ndim != nd)) {
+ PyErr_Format(PyExc_ValueError,
+ "Buffer has wrong number of dimensions (expected %d, got %d)",
+ nd, buf->ndim);
+ goto fail;
+ }
+ if (!cast) {
+ __Pyx_BufFmt_Context ctx;
+ __Pyx_BufFmt_Init(&ctx, stack, dtype);
+ if (!__Pyx_BufFmt_CheckString(&ctx, buf->format)) goto fail;
+ }
+ if (unlikely((size_t)buf->itemsize != dtype->size)) {
+ PyErr_Format(PyExc_ValueError,
+ "Item size of buffer (%" CYTHON_FORMAT_SSIZE_T "d byte%s) does not match size of '%s' (%" CYTHON_FORMAT_SSIZE_T "d byte%s)",
+ buf->itemsize, (buf->itemsize > 1) ? "s" : "",
+ dtype->name, (Py_ssize_t)dtype->size, (dtype->size > 1) ? "s" : "");
+ goto fail;
+ }
+ if (buf->suboffsets == NULL) buf->suboffsets = __Pyx_minusones;
+ return 0;
+fail:;
+ __Pyx_SafeReleaseBuffer(buf);
+ return -1;
+}
+
+
+/////////////// BufferFormatCheck.proto ///////////////
+
+// Buffer format string checking
+//
+// Buffer type checking. Utility code for checking that acquired
+// buffers match our assumptions. We only need to check ndim and
+// the format string; the access mode/flags is checked by the
+// exporter. See:
+//
+// http://docs.python.org/3/library/struct.html
+// http://legacy.python.org/dev/peps/pep-3118/#additions-to-the-struct-string-syntax
+//
+// The alignment code is copied from _struct.c in Python.
-static CYTHON_INLINE int __Pyx_GetBufferAndValidate(Py_buffer* buf, PyObject* obj,
- __Pyx_TypeInfo* dtype, int flags, int nd, int cast, __Pyx_BufFmt_StackElem* stack);
-static CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info);
static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts);
static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx,
__Pyx_BufFmt_StackElem* stack,
- __Pyx_TypeInfo* type); /* PROTO */
-
+ __Pyx_TypeInfo* type); /*proto*/
/////////////// BufferFormatCheck ///////////////
+//@requires: ModuleSetupCode.c::IsLittleEndian
+//@requires: BufferFormatStructs
static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx,
__Pyx_BufFmt_StackElem* stack,
@@ -203,7 +273,7 @@
return -1;
} else {
count = *t++ - '0';
- while (*t >= '0' && *t < '9') {
+ while (*t >= '0' && *t <= '9') {
count *= 10;
count += *t++ - '0';
}
@@ -228,6 +298,7 @@
static const char* __Pyx_BufFmt_DescribeTypeChar(char ch, int is_complex) {
switch (ch) {
+ case '?': return "'bool'";
case 'c': return "'char'";
case 'b': return "'signed char'";
case 'B': return "'unsigned char'";
@@ -272,7 +343,7 @@
static size_t __Pyx_BufFmt_TypeCharToNativeSize(char ch, int is_complex) {
switch (ch) {
- case 'c': case 'b': case 'B': case 's': case 'p': return 1;
+ case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1;
case 'h': case 'H': return sizeof(short);
case 'i': case 'I': return sizeof(int);
case 'l': case 'L': return sizeof(long);
@@ -361,7 +432,7 @@
case 'b': case 'h': case 'i':
case 'l': case 'q': case 's': case 'p':
return 'I';
- case 'B': case 'H': case 'I': case 'L': case 'Q':
+ case '?': case 'B': case 'H': case 'I': case 'L': case 'Q':
return 'U';
case 'f': case 'd': case 'g':
return (is_complex ? 'C' : 'R');
@@ -527,7 +598,7 @@
}
/* Parse an array in the format string (e.g. (1,2,3)) */
-static CYTHON_INLINE PyObject *
+static PyObject *
__pyx_buffmt_parse_array(__Pyx_BufFmt_Context* ctx, const char** tsp)
{
const char *ts = *tsp;
@@ -681,8 +752,8 @@
__Pyx_BufFmt_RaiseUnexpectedChar('Z');
return NULL;
}
- /* fall through */
- case 'c': case 'b': case 'B': case 'h': case 'H': case 'i': case 'I':
+ CYTHON_FALLTHROUGH;
+ case '?': case 'c': case 'b': case 'B': case 'h': case 'H': case 'i': case 'I':
case 'l': case 'L': case 'q': case 'Q':
case 'f': case 'd': case 'g':
case 'O': case 'p':
@@ -695,7 +766,7 @@
++ts;
break;
}
- /* fall through */
+ CYTHON_FALLTHROUGH;
case 's':
/* 's' or new type (cannot be added to current pool) */
if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;
@@ -725,60 +796,13 @@
}
}
-static CYTHON_INLINE void __Pyx_ZeroBuffer(Py_buffer* buf) {
- buf->buf = NULL;
- buf->obj = NULL;
- buf->strides = __Pyx_zeros;
- buf->shape = __Pyx_zeros;
- buf->suboffsets = __Pyx_minusones;
-}
-
-static CYTHON_INLINE int __Pyx_GetBufferAndValidate(
- Py_buffer* buf, PyObject* obj, __Pyx_TypeInfo* dtype, int flags,
- int nd, int cast, __Pyx_BufFmt_StackElem* stack)
-{
- if (obj == Py_None || obj == NULL) {
- __Pyx_ZeroBuffer(buf);
- return 0;
- }
- buf->buf = NULL;
- if (__Pyx_GetBuffer(obj, buf, flags) == -1) goto fail;
- if (buf->ndim != nd) {
- PyErr_Format(PyExc_ValueError,
- "Buffer has wrong number of dimensions (expected %d, got %d)",
- nd, buf->ndim);
- goto fail;
- }
- if (!cast) {
- __Pyx_BufFmt_Context ctx;
- __Pyx_BufFmt_Init(&ctx, stack, dtype);
- if (!__Pyx_BufFmt_CheckString(&ctx, buf->format)) goto fail;
- }
- if ((unsigned)buf->itemsize != dtype->size) {
- PyErr_Format(PyExc_ValueError,
- "Item size of buffer (%" CYTHON_FORMAT_SSIZE_T "d byte%s) does not match size of '%s' (%" CYTHON_FORMAT_SSIZE_T "d byte%s)",
- buf->itemsize, (buf->itemsize > 1) ? "s" : "",
- dtype->name, (Py_ssize_t)dtype->size, (dtype->size > 1) ? "s" : "");
- goto fail;
- }
- if (buf->suboffsets == NULL) buf->suboffsets = __Pyx_minusones;
- return 0;
-fail:;
- __Pyx_ZeroBuffer(buf);
- return -1;
-}
-
-static CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info) {
- if (info->buf == NULL) return;
- if (info->suboffsets == __Pyx_minusones) info->suboffsets = NULL;
- __Pyx_ReleaseBuffer(info);
-}
-
/////////////// TypeInfoCompare.proto ///////////////
static int __pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b);
/////////////// TypeInfoCompare ///////////////
-/* See if two dtypes are equal */
+//@requires: BufferFormatStructs
+
+// See if two dtypes are equal
static int
__pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b)
{
@@ -837,7 +861,6 @@
}
-
/////////////// TypeInfoToFormat.proto ///////////////
struct __pyx_typeinfo_string {
char string[3];
@@ -845,7 +868,9 @@
static struct __pyx_typeinfo_string __Pyx_TypeInfoToFormat(__Pyx_TypeInfo *type);
/////////////// TypeInfoToFormat ///////////////
-{{# See also MemoryView.pyx:BufferFormatFromTypeInfo }}
+//@requires: BufferFormatStructs
+
+// See also MemoryView.pyx:BufferFormatFromTypeInfo
static struct __pyx_typeinfo_string __Pyx_TypeInfoToFormat(__Pyx_TypeInfo *type) {
struct __pyx_typeinfo_string result = { {0} };
diff -Nru cython-0.26.1/Cython/Utility/Builtins.c cython-0.29.14/Cython/Utility/Builtins.c
--- cython-0.26.1/Cython/Utility/Builtins.c 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Utility/Builtins.c 2018-09-22 14:18:56.000000000 +0000
@@ -109,7 +109,7 @@
locals = globals;
}
- if (PyDict_GetItem(globals, PYIDENT("__builtins__")) == NULL) {
+ if (__Pyx_PyDict_GetItemStr(globals, PYIDENT("__builtins__")) == NULL) {
if (PyDict_SetItem(globals, PYIDENT("__builtins__"), PyEval_GetBuiltins()) < 0)
goto bad;
}
@@ -168,19 +168,23 @@
//////////////////// GetAttr3 ////////////////////
//@requires: ObjectHandling.c::GetAttr
+//@requires: Exceptions.c::PyThreadStateGet
+//@requires: Exceptions.c::PyErrFetchRestore
+//@requires: Exceptions.c::PyErrExceptionMatches
+
+static PyObject *__Pyx_GetAttr3Default(PyObject *d) {
+ __Pyx_PyThreadState_declare
+ __Pyx_PyThreadState_assign
+ if (unlikely(!__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError)))
+ return NULL;
+ __Pyx_PyErr_Clear();
+ Py_INCREF(d);
+ return d;
+}
static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *o, PyObject *n, PyObject *d) {
PyObject *r = __Pyx_GetAttr(o, n);
- if (unlikely(!r)) {
- if (!PyErr_ExceptionMatches(PyExc_AttributeError))
- goto bad;
- PyErr_Clear();
- r = d;
- Py_INCREF(d);
- }
- return r;
-bad:
- return NULL;
+ return (likely(r)) ? r : __Pyx_GetAttr3Default(d);
}
//////////////////// HasAttr.proto ////////////////////
@@ -227,46 +231,65 @@
return s;
}
-//////////////////// abs_int.proto ////////////////////
-
-static CYTHON_INLINE unsigned int __Pyx_abs_int(int x) {
- if (unlikely(x == -INT_MAX-1))
- return ((unsigned int)INT_MAX) + 1U;
- return (unsigned int) abs(x);
-}
-
-//////////////////// abs_long.proto ////////////////////
-
-static CYTHON_INLINE unsigned long __Pyx_abs_long(long x) {
- if (unlikely(x == -LONG_MAX-1))
- return ((unsigned long)LONG_MAX) + 1U;
- return (unsigned long) labs(x);
-}
-
//////////////////// abs_longlong.proto ////////////////////
-static CYTHON_INLINE unsigned PY_LONG_LONG __Pyx_abs_longlong(PY_LONG_LONG x) {
- if (unlikely(x == -PY_LLONG_MAX-1))
- return ((unsigned PY_LONG_LONG)PY_LLONG_MAX) + 1U;
+static CYTHON_INLINE PY_LONG_LONG __Pyx_abs_longlong(PY_LONG_LONG x) {
#if defined (__cplusplus) && __cplusplus >= 201103L
- return (unsigned PY_LONG_LONG) std::abs(x);
+ return std::abs(x);
#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
- return (unsigned PY_LONG_LONG) llabs(x);
-#elif defined (_MSC_VER) && defined (_M_X64)
+ return llabs(x);
+#elif defined (_MSC_VER)
// abs() is defined for long, but 64-bits type on MSVC is long long.
- // Use MS-specific _abs64 instead.
- return (unsigned PY_LONG_LONG) _abs64(x);
+ // Use MS-specific _abs64() instead, which returns the original (negative) value for abs(-MAX-1)
+ return _abs64(x);
#elif defined (__GNUC__)
// gcc or clang on 64 bit windows.
- return (unsigned PY_LONG_LONG) __builtin_llabs(x);
+ return __builtin_llabs(x);
#else
if (sizeof(PY_LONG_LONG) <= sizeof(Py_ssize_t))
return __Pyx_sst_abs(x);
- return (x<0) ? (unsigned PY_LONG_LONG)-x : (unsigned PY_LONG_LONG)x;
+ return (x<0) ? -x : x;
#endif
}
+//////////////////// py_abs.proto ////////////////////
+
+#if CYTHON_USE_PYLONG_INTERNALS
+static PyObject *__Pyx_PyLong_AbsNeg(PyObject *num);/*proto*/
+
+#define __Pyx_PyNumber_Absolute(x) \
+ ((likely(PyLong_CheckExact(x))) ? \
+ (likely(Py_SIZE(x) >= 0) ? (Py_INCREF(x), (x)) : __Pyx_PyLong_AbsNeg(x)) : \
+ PyNumber_Absolute(x))
+
+#else
+#define __Pyx_PyNumber_Absolute(x) PyNumber_Absolute(x)
+#endif
+
+//////////////////// py_abs ////////////////////
+
+#if CYTHON_USE_PYLONG_INTERNALS
+static PyObject *__Pyx_PyLong_AbsNeg(PyObject *n) {
+ if (likely(Py_SIZE(n) == -1)) {
+ // digits are unsigned
+ return PyLong_FromLong(((PyLongObject*)n)->ob_digit[0]);
+ }
+#if CYTHON_COMPILING_IN_CPYTHON
+ {
+ PyObject *copy = _PyLong_Copy((PyLongObject*)n);
+ if (likely(copy)) {
+ Py_SIZE(copy) = -(Py_SIZE(copy));
+ }
+ return copy;
+ }
+#else
+ return PyNumber_Negative(n);
+#endif
+}
+#endif
+
+
//////////////////// pow2.proto ////////////////////
#define __Pyx_PyNumber_Power2(a, b) PyNumber_Power(a, b, Py_None)
@@ -441,7 +464,12 @@
return CALL_UNBOUND_METHOD(PyDict_Type, "viewitems", d);
}
+
//////////////////// pyfrozenset_new.proto ////////////////////
+
+static CYTHON_INLINE PyObject* __Pyx_PyFrozenSet_New(PyObject* it);
+
+//////////////////// pyfrozenset_new ////////////////////
//@substitute: naming
static CYTHON_INLINE PyObject* __Pyx_PyFrozenSet_New(PyObject* it) {
diff -Nru cython-0.26.1/Cython/Utility/CMath.c cython-0.29.14/Cython/Utility/CMath.c
--- cython-0.26.1/Cython/Utility/CMath.c 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Cython/Utility/CMath.c 2018-09-22 14:18:56.000000000 +0000
@@ -73,8 +73,10 @@
switch (e) {
case 3:
t *= b;
+ CYTHON_FALLTHROUGH;
case 2:
t *= b;
+ CYTHON_FALLTHROUGH;
case 1:
return t;
case 0:
diff -Nru cython-0.26.1/Cython/Utility/Complex.c cython-0.29.14/Cython/Utility/Complex.c
--- cython-0.26.1/Cython/Utility/Complex.c 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Utility/Complex.c 2019-11-01 14:13:39.000000000 +0000
@@ -1,4 +1,5 @@
-/////////////// Header.proto.h_code ///////////////
+/////////////// Header.proto ///////////////
+//@proto_block: h_code
#if !defined(CYTHON_CCOMPLEX)
#if defined(__cplusplus)
@@ -49,7 +50,8 @@
#endif
-/////////////// Declarations.proto.complex_type_declarations ///////////////
+/////////////// Declarations.proto ///////////////
+//@proto_block: complex_type_declarations
#if CYTHON_CCOMPLEX
#ifdef __cplusplus
@@ -186,13 +188,13 @@
return {{type_name}}_from_parts(a.real / b.real, a.imag / b.imag);
} else {
{{real_type}} r = b.imag / b.real;
- {{real_type}} s = 1.0 / (b.real + b.imag * r);
+ {{real_type}} s = ({{real_type}})(1.0) / (b.real + b.imag * r);
return {{type_name}}_from_parts(
(a.real + a.imag * r) * s, (a.imag - a.real * r) * s);
}
} else {
{{real_type}} r = b.real / b.imag;
- {{real_type}} s = 1.0 / (b.imag + b.real * r);
+ {{real_type}} s = ({{real_type}})(1.0) / (b.imag + b.real * r);
return {{type_name}}_from_parts(
(a.real * r + a.imag) * s, (a.imag * r - a.real) * s);
}
@@ -251,7 +253,6 @@
case 1:
return a;
case 2:
- z = __Pyx_c_prod{{func_suffix}}(a, a);
return __Pyx_c_prod{{func_suffix}}(a, a);
case 3:
z = __Pyx_c_prod{{func_suffix}}(a, a);
@@ -273,7 +274,7 @@
theta = 0;
} else {
r = -a.real;
- theta = atan2{{m}}(0, -1);
+ theta = atan2{{m}}(0.0, -1.0);
}
} else {
r = __Pyx_c_abs{{func_suffix}}(a);
diff -Nru cython-0.26.1/Cython/Utility/Coroutine.c cython-0.29.14/Cython/Utility/Coroutine.c
--- cython-0.26.1/Cython/Utility/Coroutine.c 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/Cython/Utility/Coroutine.c 2019-11-01 14:13:39.000000000 +0000
@@ -5,10 +5,17 @@
//////////////////// GeneratorYieldFrom ////////////////////
//@requires: Generator
+static void __PyxPyIter_CheckErrorAndDecref(PyObject *source) {
+ PyErr_Format(PyExc_TypeError,
+ "iter() returned non-iterator of type '%.100s'",
+ Py_TYPE(source)->tp_name);
+ Py_DECREF(source);
+}
+
static CYTHON_INLINE PyObject* __Pyx_Generator_Yield_From(__pyx_CoroutineObject *gen, PyObject *source) {
PyObject *source_gen, *retval;
#ifdef __Pyx_Coroutine_USED
- if (__Pyx_Coroutine_CheckExact(source)) {
+ if (__Pyx_Coroutine_Check(source)) {
// TODO: this should only happen for types.coroutine()ed generators, but we can't determine that here
Py_INCREF(source);
source_gen = source;
@@ -22,17 +29,23 @@
if (unlikely(!source_gen))
return NULL;
if (unlikely(!PyIter_Check(source_gen))) {
- PyErr_Format(PyExc_TypeError,
- "iter() returned non-iterator of type '%.100s'",
- Py_TYPE(source_gen)->tp_name);
- Py_DECREF(source_gen);
+ __PyxPyIter_CheckErrorAndDecref(source_gen);
return NULL;
}
} else
+ // CPython also allows non-iterable sequences to be iterated over
#endif
+ {
source_gen = PyObject_GetIter(source);
+ if (unlikely(!source_gen))
+ return NULL;
+ }
// source_gen is now the iterator, make the first next() call
+#if CYTHON_USE_TYPE_SLOTS
retval = Py_TYPE(source_gen)->tp_iternext(source_gen);
+#else
+ retval = PyIter_Next(source_gen);
+#endif
}
if (likely(retval)) {
gen->yieldfrom = source_gen;
@@ -45,115 +58,59 @@
//////////////////// CoroutineYieldFrom.proto ////////////////////
-#define __Pyx_Coroutine_Yield_From(gen, source) __Pyx__Coroutine_Yield_From(gen, source, 0)
-static CYTHON_INLINE PyObject* __Pyx__Coroutine_Yield_From(__pyx_CoroutineObject *gen, PyObject *source, int warn);
+static CYTHON_INLINE PyObject* __Pyx_Coroutine_Yield_From(__pyx_CoroutineObject *gen, PyObject *source);
//////////////////// CoroutineYieldFrom ////////////////////
//@requires: Coroutine
//@requires: GetAwaitIter
-static int __Pyx_WarnAIterDeprecation(PyObject *aiter) {
- int result;
-#if PY_MAJOR_VERSION >= 3
- result = PyErr_WarnFormat(
- PyExc_PendingDeprecationWarning, 1,
- "'%.100s' implements legacy __aiter__ protocol; "
- "__aiter__ should return an asynchronous "
- "iterator, not awaitable",
- Py_TYPE(aiter)->tp_name);
-#else
- result = PyErr_WarnEx(
- PyExc_PendingDeprecationWarning,
- "object implements legacy __aiter__ protocol; "
- "__aiter__ should return an asynchronous "
- "iterator, not awaitable",
- 1);
+static PyObject* __Pyx__Coroutine_Yield_From_Generic(__pyx_CoroutineObject *gen, PyObject *source) {
+ PyObject *retval;
+ PyObject *source_gen = __Pyx__Coroutine_GetAwaitableIter(source);
+ if (unlikely(!source_gen)) {
+ return NULL;
+ }
+ // source_gen is now the iterator, make the first next() call
+ if (__Pyx_Coroutine_Check(source_gen)) {
+ retval = __Pyx_Generator_Next(source_gen);
+ } else {
+#if CYTHON_USE_TYPE_SLOTS
+ retval = Py_TYPE(source_gen)->tp_iternext(source_gen);
+#else
+ retval = PyIter_Next(source_gen);
#endif
- return result != 0;
+ }
+ if (retval) {
+ gen->yieldfrom = source_gen;
+ return retval;
+ }
+ Py_DECREF(source_gen);
+ return NULL;
}
-static CYTHON_INLINE PyObject* __Pyx__Coroutine_Yield_From(__pyx_CoroutineObject *gen, PyObject *source, int warn) {
+static CYTHON_INLINE PyObject* __Pyx_Coroutine_Yield_From(__pyx_CoroutineObject *gen, PyObject *source) {
PyObject *retval;
- if (__Pyx_Coroutine_CheckExact(source)) {
- if (warn && unlikely(__Pyx_WarnAIterDeprecation(source))) {
- /* Warning was converted to an error. */
+ if (__Pyx_Coroutine_Check(source)) {
+ if (unlikely(((__pyx_CoroutineObject*)source)->yieldfrom)) {
+ PyErr_SetString(
+ PyExc_RuntimeError,
+ "coroutine is being awaited already");
return NULL;
}
retval = __Pyx_Generator_Next(source);
- if (retval) {
- Py_INCREF(source);
- gen->yieldfrom = source;
- return retval;
- }
+#ifdef __Pyx_AsyncGen_USED
+ // inlined "__pyx_PyAsyncGenASend" handling to avoid the series of generic calls
+ } else if (__pyx_PyAsyncGenASend_CheckExact(source)) {
+ retval = __Pyx_async_gen_asend_iternext(source);
+#endif
} else {
- PyObject *source_gen = __Pyx__Coroutine_GetAwaitableIter(source);
- if (unlikely(!source_gen))
- return NULL;
- if (warn && unlikely(__Pyx_WarnAIterDeprecation(source))) {
- /* Warning was converted to an error. */
- Py_DECREF(source_gen);
- return NULL;
- }
- // source_gen is now the iterator, make the first next() call
- if (__Pyx_Coroutine_CheckExact(source_gen)) {
- retval = __Pyx_Generator_Next(source_gen);
- } else {
- retval = Py_TYPE(source_gen)->tp_iternext(source_gen);
- }
- if (retval) {
- gen->yieldfrom = source_gen;
- return retval;
- }
- Py_DECREF(source_gen);
- }
- return NULL;
-}
-
-
-//////////////////// CoroutineAIterYieldFrom.proto ////////////////////
-
-static CYTHON_INLINE PyObject* __Pyx_Coroutine_AIter_Yield_From(__pyx_CoroutineObject *gen, PyObject *source);
-
-//////////////////// CoroutineAIterYieldFrom ////////////////////
-//@requires: CoroutineYieldFrom
-
-static CYTHON_INLINE PyObject* __Pyx_Coroutine_AIter_Yield_From(__pyx_CoroutineObject *gen, PyObject *source) {
-#if CYTHON_USE_ASYNC_SLOTS
- __Pyx_PyAsyncMethodsStruct* am = __Pyx_PyType_AsAsync(source);
- if (likely(am && am->am_anext)) {
- // Starting with CPython 3.5.2, __aiter__ should return
- // asynchronous iterators directly (not awaitables that
- // resolve to asynchronous iterators.)
- //
- // Therefore, we check if the object that was returned
- // from __aiter__ has an __anext__ method. If it does,
- // we return it directly as StopIteration result,
- // which avoids yielding.
- //
- // See http://bugs.python.org/issue27243 for more
- // details.
- PyErr_SetObject(PyExc_StopIteration, source);
- return NULL;
+ return __Pyx__Coroutine_Yield_From_Generic(gen, source);
}
-#endif
-#if PY_VERSION_HEX < 0x030500B2
- if (!__Pyx_PyType_AsAsync(source)) {
- #ifdef __Pyx_Coroutine_USED
- if (!__Pyx_Coroutine_CheckExact(source)) /* quickly rule out a likely case */
- #endif
- {
- // same as above in slow
- PyObject *method = __Pyx_PyObject_GetAttrStr(source, PYIDENT("__anext__"));
- if (method) {
- Py_DECREF(method);
- PyErr_SetObject(PyExc_StopIteration, source);
- return NULL;
- }
- PyErr_Clear();
- }
+ if (retval) {
+ Py_INCREF(source);
+ gen->yieldfrom = source;
}
-#endif
- return __Pyx__Coroutine_Yield_From(gen, source, 1);
+ return retval;
}
@@ -163,20 +120,56 @@
static PyObject *__Pyx__Coroutine_GetAwaitableIter(PyObject *o); /*proto*/
//////////////////// GetAwaitIter ////////////////////
-//@requires: ObjectHandling.c::PyObjectGetAttrStr
+//@requires: ObjectHandling.c::PyObjectGetMethod
//@requires: ObjectHandling.c::PyObjectCallNoArg
//@requires: ObjectHandling.c::PyObjectCallOneArg
static CYTHON_INLINE PyObject *__Pyx_Coroutine_GetAwaitableIter(PyObject *o) {
#ifdef __Pyx_Coroutine_USED
- if (__Pyx_Coroutine_CheckExact(o)) {
- Py_INCREF(o);
- return o;
+ if (__Pyx_Coroutine_Check(o)) {
+ return __Pyx_NewRef(o);
}
#endif
return __Pyx__Coroutine_GetAwaitableIter(o);
}
+
+static void __Pyx_Coroutine_AwaitableIterError(PyObject *source) {
+#if PY_VERSION_HEX >= 0x030600B3 || defined(_PyErr_FormatFromCause)
+ _PyErr_FormatFromCause(
+ PyExc_TypeError,
+ "'async for' received an invalid object "
+ "from __anext__: %.100s",
+ Py_TYPE(source)->tp_name);
+#elif PY_MAJOR_VERSION >= 3
+ PyObject *exc, *val, *val2, *tb;
+ assert(PyErr_Occurred());
+ PyErr_Fetch(&exc, &val, &tb);
+ PyErr_NormalizeException(&exc, &val, &tb);
+ if (tb != NULL) {
+ PyException_SetTraceback(val, tb);
+ Py_DECREF(tb);
+ }
+ Py_DECREF(exc);
+ assert(!PyErr_Occurred());
+ PyErr_Format(
+ PyExc_TypeError,
+ "'async for' received an invalid object "
+ "from __anext__: %.100s",
+ Py_TYPE(source)->tp_name);
+
+ PyErr_Fetch(&exc, &val2, &tb);
+ PyErr_NormalizeException(&exc, &val2, &tb);
+ Py_INCREF(val);
+ PyException_SetCause(val2, val);
+ PyException_SetContext(val2, val);
+ PyErr_Restore(exc, val2, tb);
+#else
+ // since Py2 does not have exception chaining, it's better to avoid shadowing exceptions there
+ source++;
+#endif
+}
+
// adapted from genobject.c in Py3.5
static PyObject *__Pyx__Coroutine_GetAwaitableIter(PyObject *obj) {
PyObject *res;
@@ -188,35 +181,32 @@
#endif
#if PY_VERSION_HEX >= 0x030500B2 || defined(PyCoro_CheckExact)
if (PyCoro_CheckExact(obj)) {
- Py_INCREF(obj);
- return obj;
+ return __Pyx_NewRef(obj);
} else
#endif
#if CYTHON_COMPILING_IN_CPYTHON && defined(CO_ITERABLE_COROUTINE)
if (PyGen_CheckExact(obj) && ((PyGenObject*)obj)->gi_code && ((PyCodeObject *)((PyGenObject*)obj)->gi_code)->co_flags & CO_ITERABLE_COROUTINE) {
// Python generator marked with "@types.coroutine" decorator
- Py_INCREF(obj);
- return obj;
+ return __Pyx_NewRef(obj);
} else
#endif
{
- PyObject *method = __Pyx_PyObject_GetAttrStr(obj, PYIDENT("__await__"));
- if (unlikely(!method)) goto slot_error;
- #if CYTHON_UNPACK_METHODS
- if (likely(PyMethod_Check(method))) {
- PyObject *self = PyMethod_GET_SELF(method);
- if (likely(self)) {
- PyObject *function = PyMethod_GET_FUNCTION(method);
- res = __Pyx_PyObject_CallOneArg(function, self);
- } else
- res = __Pyx_PyObject_CallNoArg(method);
- } else
- #endif
+ PyObject *method = NULL;
+ int is_method = __Pyx_PyObject_GetMethod(obj, PYIDENT("__await__"), &method);
+ if (likely(is_method)) {
+ res = __Pyx_PyObject_CallOneArg(method, obj);
+ } else if (likely(method)) {
res = __Pyx_PyObject_CallNoArg(method);
+ } else
+ goto slot_error;
Py_DECREF(method);
}
- if (unlikely(!res)) goto bad;
- if (!PyIter_Check(res)) {
+ if (unlikely(!res)) {
+ // surprisingly, CPython replaces the exception here...
+ __Pyx_Coroutine_AwaitableIterError(obj);
+ goto bad;
+ }
+ if (unlikely(!PyIter_Check(res))) {
PyErr_Format(PyExc_TypeError,
"__await__() returned non-iterator of type '%.100s'",
Py_TYPE(res)->tp_name);
@@ -224,7 +214,7 @@
} else {
int is_coroutine = 0;
#ifdef __Pyx_Coroutine_USED
- is_coroutine |= __Pyx_Coroutine_CheckExact(res);
+ is_coroutine |= __Pyx_Coroutine_Check(res);
#endif
#if PY_VERSION_HEX >= 0x030500B2 || defined(PyCoro_CheckExact)
is_coroutine |= PyCoro_CheckExact(res);
@@ -256,13 +246,7 @@
//@requires: GetAwaitIter
//@requires: ObjectHandling.c::PyObjectCallMethod0
-static CYTHON_INLINE PyObject *__Pyx_Coroutine_GetAsyncIter(PyObject *obj) {
-#if CYTHON_USE_ASYNC_SLOTS
- __Pyx_PyAsyncMethodsStruct* am = __Pyx_PyType_AsAsync(obj);
- if (likely(am && am->am_aiter)) {
- return (*am->am_aiter)(obj);
- }
-#endif
+static PyObject *__Pyx_Coroutine_GetAsyncIter_Generic(PyObject *obj) {
#if PY_VERSION_HEX < 0x030500B1
{
PyObject *iter = __Pyx_PyObject_CallMethod0(obj, PYIDENT("__aiter__"));
@@ -282,13 +266,26 @@
return NULL;
}
-static CYTHON_INLINE PyObject *__Pyx_Coroutine_AsyncIterNext(PyObject *obj) {
+
+static CYTHON_INLINE PyObject *__Pyx_Coroutine_GetAsyncIter(PyObject *obj) {
+#ifdef __Pyx_AsyncGen_USED
+ if (__Pyx_AsyncGen_CheckExact(obj)) {
+ return __Pyx_NewRef(obj);
+ }
+#endif
#if CYTHON_USE_ASYNC_SLOTS
- __Pyx_PyAsyncMethodsStruct* am = __Pyx_PyType_AsAsync(obj);
- if (likely(am && am->am_anext)) {
- return (*am->am_anext)(obj);
+ {
+ __Pyx_PyAsyncMethodsStruct* am = __Pyx_PyType_AsAsync(obj);
+ if (likely(am && am->am_aiter)) {
+ return (*am->am_aiter)(obj);
+ }
}
#endif
+ return __Pyx_Coroutine_GetAsyncIter_Generic(obj);
+}
+
+
+static PyObject *__Pyx__Coroutine_AsyncIterNext(PyObject *obj) {
#if PY_VERSION_HEX < 0x030500B1
{
PyObject *value = __Pyx_PyObject_CallMethod0(obj, PYIDENT("__anext__"));
@@ -304,59 +301,136 @@
}
+static CYTHON_INLINE PyObject *__Pyx_Coroutine_AsyncIterNext(PyObject *obj) {
+#ifdef __Pyx_AsyncGen_USED
+ if (__Pyx_AsyncGen_CheckExact(obj)) {
+ return __Pyx_async_gen_anext(obj);
+ }
+#endif
+#if CYTHON_USE_ASYNC_SLOTS
+ {
+ __Pyx_PyAsyncMethodsStruct* am = __Pyx_PyType_AsAsync(obj);
+ if (likely(am && am->am_anext)) {
+ return (*am->am_anext)(obj);
+ }
+ }
+#endif
+ return __Pyx__Coroutine_AsyncIterNext(obj);
+}
+
+
//////////////////// pep479.proto ////////////////////
-static void __Pyx_Generator_Replace_StopIteration(void); /*proto*/
+static void __Pyx_Generator_Replace_StopIteration(int in_async_gen); /*proto*/
//////////////////// pep479 ////////////////////
//@requires: Exceptions.c::GetException
-static void __Pyx_Generator_Replace_StopIteration(void) {
- PyObject *exc, *val, *tb;
- // Chain exceptions by moving StopIteration to exc_info before creating the RuntimeError.
- // In Py2.x, no chaining happens, but the exception still stays visible in exc_info.
+static void __Pyx_Generator_Replace_StopIteration(CYTHON_UNUSED int in_async_gen) {
+ PyObject *exc, *val, *tb, *cur_exc;
__Pyx_PyThreadState_declare
+ #ifdef __Pyx_StopAsyncIteration_USED
+ int is_async_stopiteration = 0;
+ #endif
+
+ cur_exc = PyErr_Occurred();
+ if (likely(!__Pyx_PyErr_GivenExceptionMatches(cur_exc, PyExc_StopIteration))) {
+ #ifdef __Pyx_StopAsyncIteration_USED
+ if (in_async_gen && unlikely(__Pyx_PyErr_GivenExceptionMatches(cur_exc, __Pyx_PyExc_StopAsyncIteration))) {
+ is_async_stopiteration = 1;
+ } else
+ #endif
+ return;
+ }
+
__Pyx_PyThreadState_assign
+ // Chain exceptions by moving Stop(Async)Iteration to exc_info before creating the RuntimeError.
+ // In Py2.x, no chaining happens, but the exception still stays visible in exc_info.
__Pyx_GetException(&exc, &val, &tb);
Py_XDECREF(exc);
Py_XDECREF(val);
Py_XDECREF(tb);
- PyErr_SetString(PyExc_RuntimeError, "generator raised StopIteration");
+ PyErr_SetString(PyExc_RuntimeError,
+ #ifdef __Pyx_StopAsyncIteration_USED
+ is_async_stopiteration ? "async generator raised StopAsyncIteration" :
+ in_async_gen ? "async generator raised StopIteration" :
+ #endif
+ "generator raised StopIteration");
}
//////////////////// CoroutineBase.proto ////////////////////
+//@substitute: naming
-typedef PyObject *(*__pyx_coroutine_body_t)(PyObject *, PyObject *);
+typedef PyObject *(*__pyx_coroutine_body_t)(PyObject *, PyThreadState *, PyObject *);
+#if CYTHON_USE_EXC_INFO_STACK
+// See https://bugs.python.org/issue25612
+#define __Pyx_ExcInfoStruct _PyErr_StackItem
+#else
+// Minimal replacement struct for Py<3.7, without the Py3.7 exception state stack.
typedef struct {
- PyObject_HEAD
- __pyx_coroutine_body_t body;
- PyObject *closure;
PyObject *exc_type;
PyObject *exc_value;
PyObject *exc_traceback;
+} __Pyx_ExcInfoStruct;
+#endif
+
+typedef struct {
+ PyObject_HEAD
+ __pyx_coroutine_body_t body;
+ PyObject *closure;
+ __Pyx_ExcInfoStruct gi_exc_state;
PyObject *gi_weakreflist;
PyObject *classobj;
PyObject *yieldfrom;
PyObject *gi_name;
PyObject *gi_qualname;
PyObject *gi_modulename;
+ PyObject *gi_code;
int resume_label;
// using T_BOOL for property below requires char value
char is_running;
} __pyx_CoroutineObject;
static __pyx_CoroutineObject *__Pyx__Coroutine_New(
- PyTypeObject *type, __pyx_coroutine_body_t body, PyObject *closure,
+ PyTypeObject *type, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure,
PyObject *name, PyObject *qualname, PyObject *module_name); /*proto*/
+
+static __pyx_CoroutineObject *__Pyx__Coroutine_NewInit(
+ __pyx_CoroutineObject *gen, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure,
+ PyObject *name, PyObject *qualname, PyObject *module_name); /*proto*/
+
+static CYTHON_INLINE void __Pyx_Coroutine_ExceptionClear(__Pyx_ExcInfoStruct *self);
static int __Pyx_Coroutine_clear(PyObject *self); /*proto*/
+static PyObject *__Pyx_Coroutine_Send(PyObject *self, PyObject *value); /*proto*/
+static PyObject *__Pyx_Coroutine_Close(PyObject *self); /*proto*/
+static PyObject *__Pyx_Coroutine_Throw(PyObject *gen, PyObject *args); /*proto*/
+
+// macros for exception state swapping instead of inline functions to make use of the local thread state context
+#if CYTHON_USE_EXC_INFO_STACK
+#define __Pyx_Coroutine_SwapException(self)
+#define __Pyx_Coroutine_ResetAndClearException(self) __Pyx_Coroutine_ExceptionClear(&(self)->gi_exc_state)
+#else
+#define __Pyx_Coroutine_SwapException(self) { \
+ __Pyx_ExceptionSwap(&(self)->gi_exc_state.exc_type, &(self)->gi_exc_state.exc_value, &(self)->gi_exc_state.exc_traceback); \
+ __Pyx_Coroutine_ResetFrameBackpointer(&(self)->gi_exc_state); \
+ }
+#define __Pyx_Coroutine_ResetAndClearException(self) { \
+ __Pyx_ExceptionReset((self)->gi_exc_state.exc_type, (self)->gi_exc_state.exc_value, (self)->gi_exc_state.exc_traceback); \
+ (self)->gi_exc_state.exc_type = (self)->gi_exc_state.exc_value = (self)->gi_exc_state.exc_traceback = NULL; \
+ }
+#endif
-#if 1 || PY_VERSION_HEX < 0x030300B0
-static int __Pyx_PyGen_FetchStopIterationValue(PyObject **pvalue); /*proto*/
+#if CYTHON_FAST_THREAD_STATE
+#define __Pyx_PyGen_FetchStopIterationValue(pvalue) \
+ __Pyx_PyGen__FetchStopIterationValue($local_tstate_cname, pvalue)
#else
-#define __Pyx_PyGen_FetchStopIterationValue(pvalue) PyGen_FetchStopIterationValue(pvalue)
+#define __Pyx_PyGen_FetchStopIterationValue(pvalue) \
+ __Pyx_PyGen__FetchStopIterationValue(__Pyx_PyThreadState_Current, pvalue)
#endif
+static int __Pyx_PyGen__FetchStopIterationValue(PyThreadState *tstate, PyObject **pvalue); /*proto*/
+static CYTHON_INLINE void __Pyx_Coroutine_ResetFrameBackpointer(__Pyx_ExcInfoStruct *exc_state); /*proto*/
//////////////////// Coroutine.proto ////////////////////
@@ -365,13 +439,24 @@
static PyTypeObject *__pyx_CoroutineType = 0;
static PyTypeObject *__pyx_CoroutineAwaitType = 0;
#define __Pyx_Coroutine_CheckExact(obj) (Py_TYPE(obj) == __pyx_CoroutineType)
+// __Pyx_Coroutine_Check(obj): see override for IterableCoroutine below
+#define __Pyx_Coroutine_Check(obj) __Pyx_Coroutine_CheckExact(obj)
+#define __Pyx_CoroutineAwait_CheckExact(obj) (Py_TYPE(obj) == __pyx_CoroutineAwaitType)
-#define __Pyx_Coroutine_New(body, closure, name, qualname, module_name) \
- __Pyx__Coroutine_New(__pyx_CoroutineType, body, closure, name, qualname, module_name)
+#define __Pyx_Coroutine_New(body, code, closure, name, qualname, module_name) \
+ __Pyx__Coroutine_New(__pyx_CoroutineType, body, code, closure, name, qualname, module_name)
static int __pyx_Coroutine_init(void); /*proto*/
static PyObject *__Pyx__Coroutine_await(PyObject *coroutine); /*proto*/
+typedef struct {
+ PyObject_HEAD
+ PyObject *coroutine;
+} __pyx_CoroutineAwaitObject;
+
+static PyObject *__Pyx_CoroutineAwait_Close(__pyx_CoroutineAwaitObject *self, PyObject *arg); /*proto*/
+static PyObject *__Pyx_CoroutineAwait_Throw(__pyx_CoroutineAwaitObject *self, PyObject *args); /*proto*/
+
//////////////////// Generator.proto ////////////////////
@@ -379,19 +464,25 @@
static PyTypeObject *__pyx_GeneratorType = 0;
#define __Pyx_Generator_CheckExact(obj) (Py_TYPE(obj) == __pyx_GeneratorType)
-#define __Pyx_Generator_New(body, closure, name, qualname, module_name) \
- __Pyx__Coroutine_New(__pyx_GeneratorType, body, closure, name, qualname, module_name)
+#define __Pyx_Generator_New(body, code, closure, name, qualname, module_name) \
+ __Pyx__Coroutine_New(__pyx_GeneratorType, body, code, closure, name, qualname, module_name)
static PyObject *__Pyx_Generator_Next(PyObject *self);
static int __pyx_Generator_init(void); /*proto*/
+//////////////////// AsyncGen ////////////////////
+//@requires: AsyncGen.c::AsyncGenerator
+// -> empty, only delegates to separate file
+
+
//////////////////// CoroutineBase ////////////////////
//@substitute: naming
//@requires: Exceptions.c::PyErrFetchRestore
//@requires: Exceptions.c::PyThreadStateGet
//@requires: Exceptions.c::SwapException
//@requires: Exceptions.c::RaiseException
+//@requires: Exceptions.c::SaveResetException
//@requires: ObjectHandling.c::PyObjectCallMethod1
//@requires: ObjectHandling.c::PyObjectGetAttrStr
//@requires: CommonStructures.c::FetchCommonType
@@ -399,10 +490,6 @@
#include <structmember.h>
#include <frameobject.h>
-static PyObject *__Pyx_Coroutine_Send(PyObject *self, PyObject *value);
-static PyObject *__Pyx_Coroutine_Close(PyObject *self);
-static PyObject *__Pyx_Coroutine_Throw(PyObject *gen, PyObject *args);
-
#define __Pyx_Coroutine_Undelegate(gen) Py_CLEAR((gen)->yieldfrom)
// If StopIteration exception is set, fetches its 'value'
@@ -411,12 +498,9 @@
// Returns 0 if no exception or StopIteration is set.
// If any other exception is set, returns -1 and leaves
// pvalue unchanged.
-#if 1 || PY_VERSION_HEX < 0x030300B0
-static int __Pyx_PyGen_FetchStopIterationValue(PyObject **pvalue) {
+static int __Pyx_PyGen__FetchStopIterationValue(CYTHON_UNUSED PyThreadState *$local_tstate_cname, PyObject **pvalue) {
PyObject *et, *ev, *tb;
PyObject *value = NULL;
- __Pyx_PyThreadState_declare
- __Pyx_PyThreadState_assign
__Pyx_ErrFetch(&et, &ev, &tb);
@@ -457,7 +541,7 @@
}
Py_DECREF(ev);
}
- else if (!PyObject_TypeCheck(ev, (PyTypeObject*)PyExc_StopIteration)) {
+ else if (!__Pyx_TypeCheck(ev, (PyTypeObject*)PyExc_StopIteration)) {
// 'steal' reference to ev
value = ev;
}
@@ -467,7 +551,7 @@
*pvalue = value;
return 0;
}
- } else if (!PyErr_GivenExceptionMatches(et, PyExc_StopIteration)) {
+ } else if (!__Pyx_PyErr_GivenExceptionMatches(et, PyExc_StopIteration)) {
__Pyx_ErrRestore(et, ev, tb);
return -1;
}
@@ -503,107 +587,203 @@
*pvalue = value;
return 0;
}
-#endif
static CYTHON_INLINE
-void __Pyx_Coroutine_ExceptionClear(__pyx_CoroutineObject *self) {
- PyObject *exc_type = self->exc_type;
- PyObject *exc_value = self->exc_value;
- PyObject *exc_traceback = self->exc_traceback;
-
- self->exc_type = NULL;
- self->exc_value = NULL;
- self->exc_traceback = NULL;
-
- Py_XDECREF(exc_type);
- Py_XDECREF(exc_value);
- Py_XDECREF(exc_traceback);
+void __Pyx_Coroutine_ExceptionClear(__Pyx_ExcInfoStruct *exc_state) {
+ PyObject *t, *v, *tb;
+ t = exc_state->exc_type;
+ v = exc_state->exc_value;
+ tb = exc_state->exc_traceback;
+
+ exc_state->exc_type = NULL;
+ exc_state->exc_value = NULL;
+ exc_state->exc_traceback = NULL;
+
+ Py_XDECREF(t);
+ Py_XDECREF(v);
+ Py_XDECREF(tb);
}
-static CYTHON_INLINE
-int __Pyx_Coroutine_CheckRunning(__pyx_CoroutineObject *gen) {
- if (unlikely(gen->is_running)) {
- PyErr_SetString(PyExc_ValueError,
- "generator already executing");
- return 1;
+#define __Pyx_Coroutine_AlreadyRunningError(gen) (__Pyx__Coroutine_AlreadyRunningError(gen), (PyObject*)NULL)
+static void __Pyx__Coroutine_AlreadyRunningError(CYTHON_UNUSED __pyx_CoroutineObject *gen) {
+ const char *msg;
+ if ((0)) {
+ #ifdef __Pyx_Coroutine_USED
+ } else if (__Pyx_Coroutine_Check((PyObject*)gen)) {
+ msg = "coroutine already executing";
+ #endif
+ #ifdef __Pyx_AsyncGen_USED
+ } else if (__Pyx_AsyncGen_CheckExact((PyObject*)gen)) {
+ msg = "async generator already executing";
+ #endif
+ } else {
+ msg = "generator already executing";
}
- return 0;
+ PyErr_SetString(PyExc_ValueError, msg);
}
-static CYTHON_INLINE
-PyObject *__Pyx_Coroutine_SendEx(__pyx_CoroutineObject *self, PyObject *value) {
- PyObject *retval;
+#define __Pyx_Coroutine_NotStartedError(gen) (__Pyx__Coroutine_NotStartedError(gen), (PyObject*)NULL)
+static void __Pyx__Coroutine_NotStartedError(CYTHON_UNUSED PyObject *gen) {
+ const char *msg;
+ if ((0)) {
+ #ifdef __Pyx_Coroutine_USED
+ } else if (__Pyx_Coroutine_Check(gen)) {
+ msg = "can't send non-None value to a just-started coroutine";
+ #endif
+ #ifdef __Pyx_AsyncGen_USED
+ } else if (__Pyx_AsyncGen_CheckExact(gen)) {
+ msg = "can't send non-None value to a just-started async generator";
+ #endif
+ } else {
+ msg = "can't send non-None value to a just-started generator";
+ }
+ PyErr_SetString(PyExc_TypeError, msg);
+}
+
+#define __Pyx_Coroutine_AlreadyTerminatedError(gen, value, closing) (__Pyx__Coroutine_AlreadyTerminatedError(gen, value, closing), (PyObject*)NULL)
+static void __Pyx__Coroutine_AlreadyTerminatedError(CYTHON_UNUSED PyObject *gen, PyObject *value, CYTHON_UNUSED int closing) {
+ #ifdef __Pyx_Coroutine_USED
+ if (!closing && __Pyx_Coroutine_Check(gen)) {
+ // `self` is an exhausted coroutine: raise an error,
+ // except when called from gen_close(), which should
+ // always be a silent method.
+ PyErr_SetString(PyExc_RuntimeError, "cannot reuse already awaited coroutine");
+ } else
+ #endif
+ if (value) {
+ // `gen` is an exhausted generator:
+ // only set exception if called from send().
+ #ifdef __Pyx_AsyncGen_USED
+ if (__Pyx_AsyncGen_CheckExact(gen))
+ PyErr_SetNone(__Pyx_PyExc_StopAsyncIteration);
+ else
+ #endif
+ PyErr_SetNone(PyExc_StopIteration);
+ }
+}
+
+static
+PyObject *__Pyx_Coroutine_SendEx(__pyx_CoroutineObject *self, PyObject *value, int closing) {
__Pyx_PyThreadState_declare
+ PyThreadState *tstate;
+ __Pyx_ExcInfoStruct *exc_state;
+ PyObject *retval;
assert(!self->is_running);
if (unlikely(self->resume_label == 0)) {
if (unlikely(value && value != Py_None)) {
- PyErr_SetString(PyExc_TypeError,
- "can't send non-None value to a "
- "just-started generator");
- return NULL;
+ return __Pyx_Coroutine_NotStartedError((PyObject*)self);
}
}
if (unlikely(self->resume_label == -1)) {
- PyErr_SetNone(PyExc_StopIteration);
- return NULL;
+ return __Pyx_Coroutine_AlreadyTerminatedError((PyObject*)self, value, closing);
}
+#if CYTHON_FAST_THREAD_STATE
__Pyx_PyThreadState_assign
- if (value) {
-#if CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_PYSTON
- // FIXME: what to do in PyPy?
+ tstate = $local_tstate_cname;
#else
+ tstate = __Pyx_PyThreadState_Current;
+#endif
+
+ // Traceback/Frame rules pre-Py3.7:
+ // - on entry, save external exception state in self->gi_exc_state, restore it on exit
+ // - on exit, keep internally generated exceptions in self->gi_exc_state, clear everything else
+ // - on entry, set "f_back" pointer of internal exception traceback to (current) outer call frame
+ // - on exit, clear "f_back" of internal exception traceback
+ // - do not touch external frames and tracebacks
+
+ // Traceback/Frame rules for Py3.7+ (CYTHON_USE_EXC_INFO_STACK):
+ // - on entry, push internal exception state in self->gi_exc_state on the exception stack
+ // - on exit, keep internally generated exceptions in self->gi_exc_state, clear everything else
+ // - on entry, set "f_back" pointer of internal exception traceback to (current) outer call frame
+ // - on exit, clear "f_back" of internal exception traceback
+ // - do not touch external frames and tracebacks
+
+ exc_state = &self->gi_exc_state;
+ if (exc_state->exc_type) {
+ #if CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_PYSTON
+ // FIXME: what to do in PyPy?
+ #else
// Generators always return to their most recent caller, not
// necessarily their creator.
- if (self->exc_traceback) {
- PyTracebackObject *tb = (PyTracebackObject *) self->exc_traceback;
+ if (exc_state->exc_traceback) {
+ PyTracebackObject *tb = (PyTracebackObject *) exc_state->exc_traceback;
PyFrameObject *f = tb->tb_frame;
- Py_XINCREF($local_tstate_cname->frame);
+ Py_XINCREF(tstate->frame);
assert(f->f_back == NULL);
- f->f_back = $local_tstate_cname->frame;
+ f->f_back = tstate->frame;
}
-#endif
- __Pyx_ExceptionSwap(&self->exc_type, &self->exc_value,
- &self->exc_traceback);
+ #endif
+ }
+
+#if CYTHON_USE_EXC_INFO_STACK
+ // See https://bugs.python.org/issue25612
+ exc_state->previous_item = tstate->exc_info;
+ tstate->exc_info = exc_state;
+#else
+ if (exc_state->exc_type) {
+ // We were in an except handler when we left,
+ // restore the exception state which was put aside.
+ __Pyx_ExceptionSwap(&exc_state->exc_type, &exc_state->exc_value, &exc_state->exc_traceback);
+ // self->exc_* now holds the exception state of the caller
} else {
- __Pyx_Coroutine_ExceptionClear(self);
+ // save away the exception state of the caller
+ __Pyx_Coroutine_ExceptionClear(exc_state);
+ __Pyx_ExceptionSave(&exc_state->exc_type, &exc_state->exc_value, &exc_state->exc_traceback);
}
+#endif
self->is_running = 1;
- retval = self->body((PyObject *) self, value);
+ retval = self->body((PyObject *) self, tstate, value);
self->is_running = 0;
- if (retval) {
- __Pyx_ExceptionSwap(&self->exc_type, &self->exc_value,
- &self->exc_traceback);
+#if CYTHON_USE_EXC_INFO_STACK
+ // See https://bugs.python.org/issue25612
+ exc_state = &self->gi_exc_state;
+ tstate->exc_info = exc_state->previous_item;
+ exc_state->previous_item = NULL;
+ // Cut off the exception frame chain so that we can reconnect it on re-entry above.
+ __Pyx_Coroutine_ResetFrameBackpointer(exc_state);
+#endif
+
+ return retval;
+}
+
+static CYTHON_INLINE void __Pyx_Coroutine_ResetFrameBackpointer(__Pyx_ExcInfoStruct *exc_state) {
+ // Don't keep the reference to f_back any longer than necessary. It
+ // may keep a chain of frames alive or it could create a reference
+ // cycle.
+ PyObject *exc_tb = exc_state->exc_traceback;
+
+ if (likely(exc_tb)) {
#if CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_PYSTON
- // FIXME: what to do in PyPy?
+ // FIXME: what to do in PyPy?
#else
- // Don't keep the reference to f_back any longer than necessary. It
- // may keep a chain of frames alive or it could create a reference
- // cycle.
- if (self->exc_traceback) {
- PyTracebackObject *tb = (PyTracebackObject *) self->exc_traceback;
- PyFrameObject *f = tb->tb_frame;
- Py_CLEAR(f->f_back);
- }
+ PyTracebackObject *tb = (PyTracebackObject *) exc_tb;
+ PyFrameObject *f = tb->tb_frame;
+ Py_CLEAR(f->f_back);
#endif
- } else {
- __Pyx_Coroutine_ExceptionClear(self);
}
-
- return retval;
}
static CYTHON_INLINE
-PyObject *__Pyx_Coroutine_MethodReturn(PyObject *retval) {
- if (unlikely(!retval && !PyErr_Occurred())) {
- // method call must not terminate with NULL without setting an exception
- PyErr_SetNone(PyExc_StopIteration);
+PyObject *__Pyx_Coroutine_MethodReturn(CYTHON_UNUSED PyObject* gen, PyObject *retval) {
+ if (unlikely(!retval)) {
+ __Pyx_PyThreadState_declare
+ __Pyx_PyThreadState_assign
+ if (!__Pyx_PyErr_Occurred()) {
+ // method call must not terminate with NULL without setting an exception
+ PyObject *exc = PyExc_StopIteration;
+ #ifdef __Pyx_AsyncGen_USED
+ if (__Pyx_AsyncGen_CheckExact(gen))
+ exc = __Pyx_PyExc_StopAsyncIteration;
+ #endif
+ __Pyx_PyErr_SetNone(exc);
+ }
}
return retval;
}
@@ -613,9 +793,9 @@
PyObject *ret;
PyObject *val = NULL;
__Pyx_Coroutine_Undelegate(gen);
- __Pyx_PyGen_FetchStopIterationValue(&val);
+ __Pyx_PyGen__FetchStopIterationValue(__Pyx_PyThreadState_Current, &val);
// val == NULL on failure => pass on exception
- ret = __Pyx_Coroutine_SendEx(gen, val);
+ ret = __Pyx_Coroutine_SendEx(gen, val, 0);
Py_XDECREF(val);
return ret;
}
@@ -624,8 +804,8 @@
PyObject *retval;
__pyx_CoroutineObject *gen = (__pyx_CoroutineObject*) self;
PyObject *yf = gen->yieldfrom;
- if (unlikely(__Pyx_Coroutine_CheckRunning(gen)))
- return NULL;
+ if (unlikely(gen->is_running))
+ return __Pyx_Coroutine_AlreadyRunningError(gen);
if (yf) {
PyObject *ret;
// FIXME: does this really need an INCREF() ?
@@ -637,10 +817,27 @@
} else
#endif
#ifdef __Pyx_Coroutine_USED
- if (__Pyx_Coroutine_CheckExact(yf)) {
+ if (__Pyx_Coroutine_Check(yf)) {
ret = __Pyx_Coroutine_Send(yf, value);
} else
#endif
+ #ifdef __Pyx_AsyncGen_USED
+ if (__pyx_PyAsyncGenASend_CheckExact(yf)) {
+ ret = __Pyx_async_gen_asend_send(yf, value);
+ } else
+ #endif
+ #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03030000 && (defined(__linux__) || PY_VERSION_HEX >= 0x030600B3)
+ // _PyGen_Send() is not exported before Py3.6
+ if (PyGen_CheckExact(yf)) {
+ ret = _PyGen_Send((PyGenObject*)yf, value == Py_None ? NULL : value);
+ } else
+ #endif
+ #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03050000 && defined(PyCoro_CheckExact) && (defined(__linux__) || PY_VERSION_HEX >= 0x030600B3)
+ // _PyGen_Send() is not exported before Py3.6
+ if (PyCoro_CheckExact(yf)) {
+ ret = _PyGen_Send((PyGenObject*)yf, value == Py_None ? NULL : value);
+ } else
+ #endif
{
if (value == Py_None)
ret = Py_TYPE(yf)->tp_iternext(yf);
@@ -654,9 +851,9 @@
}
retval = __Pyx_Coroutine_FinishDelegation(gen);
} else {
- retval = __Pyx_Coroutine_SendEx(gen, value);
+ retval = __Pyx_Coroutine_SendEx(gen, value, 0);
}
- return __Pyx_Coroutine_MethodReturn(retval);
+ return __Pyx_Coroutine_MethodReturn(self, retval);
}
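The send hunks above special-case plain CPython generators (`_PyGen_Send`, only exported from Py3.6) and translate `send(None)` into a plain `tp_iternext` call. A plain-Python sketch of the protocol this C code implements (the `counter` generator is a made-up example, not Cython code):

```python
def counter():
    """Toy generator: yields a running total of the values sent in."""
    total = 0
    while True:
        sent = yield total
        if sent is not None:
            total += sent

g = counter()
print(g.send(None))  # first send must be None, same as next(g) -> 0
print(g.send(5))     # resumes the paused yield with the value 5 -> 5
print(next(g))       # next(g) is equivalent to g.send(None) -> 5
```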
// This helper function is used by gen_close and gen_throw to
@@ -673,11 +870,26 @@
} else
#endif
#ifdef __Pyx_Coroutine_USED
- if (__Pyx_Coroutine_CheckExact(yf)) {
+ if (__Pyx_Coroutine_Check(yf)) {
retval = __Pyx_Coroutine_Close(yf);
if (!retval)
return -1;
} else
+ if (__Pyx_CoroutineAwait_CheckExact(yf)) {
+ retval = __Pyx_CoroutineAwait_Close((__pyx_CoroutineAwaitObject*)yf, NULL);
+ if (!retval)
+ return -1;
+ } else
+ #endif
+ #ifdef __Pyx_AsyncGen_USED
+ if (__pyx_PyAsyncGenASend_CheckExact(yf)) {
+ retval = __Pyx_async_gen_asend_close(yf, NULL);
+ // cannot fail
+ } else
+ if (__pyx_PyAsyncGenAThrow_CheckExact(yf)) {
+ retval = __Pyx_async_gen_athrow_close(yf, NULL);
+ // cannot fail
+ } else
#endif
{
PyObject *meth;
@@ -703,8 +915,8 @@
static PyObject *__Pyx_Generator_Next(PyObject *self) {
__pyx_CoroutineObject *gen = (__pyx_CoroutineObject*) self;
PyObject *yf = gen->yieldfrom;
- if (unlikely(__Pyx_Coroutine_CheckRunning(gen)))
- return NULL;
+ if (unlikely(gen->is_running))
+ return __Pyx_Coroutine_AlreadyRunningError(gen);
if (yf) {
PyObject *ret;
// FIXME: does this really need an INCREF() ?
@@ -716,6 +928,17 @@
ret = __Pyx_Generator_Next(yf);
} else
#endif
+ #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03030000 && (defined(__linux__) || PY_VERSION_HEX >= 0x030600B3)
+ // _PyGen_Send() is not exported before Py3.6
+ if (PyGen_CheckExact(yf)) {
+ ret = _PyGen_Send((PyGenObject*)yf, NULL);
+ } else
+ #endif
+ #ifdef __Pyx_Coroutine_USED
+ if (__Pyx_Coroutine_Check(yf)) {
+ ret = __Pyx_Coroutine_Send(yf, Py_None);
+ } else
+ #endif
ret = Py_TYPE(yf)->tp_iternext(yf);
gen->is_running = 0;
//Py_DECREF(yf);
@@ -724,7 +947,11 @@
}
return __Pyx_Coroutine_FinishDelegation(gen);
}
- return __Pyx_Coroutine_SendEx(gen, Py_None);
+ return __Pyx_Coroutine_SendEx(gen, Py_None, 0);
+}
+
+static PyObject *__Pyx_Coroutine_Close_Method(PyObject *self, CYTHON_UNUSED PyObject *arg) {
+ return __Pyx_Coroutine_Close(self);
}
static PyObject *__Pyx_Coroutine_Close(PyObject *self) {
@@ -733,8 +960,8 @@
PyObject *yf = gen->yieldfrom;
int err = 0;
- if (unlikely(__Pyx_Coroutine_CheckRunning(gen)))
- return NULL;
+ if (unlikely(gen->is_running))
+ return __Pyx_Coroutine_AlreadyRunningError(gen);
if (yf) {
Py_INCREF(yf);
@@ -744,20 +971,31 @@
}
if (err == 0)
PyErr_SetNone(PyExc_GeneratorExit);
- retval = __Pyx_Coroutine_SendEx(gen, NULL);
- if (retval) {
+ retval = __Pyx_Coroutine_SendEx(gen, NULL, 1);
+ if (unlikely(retval)) {
+ const char *msg;
Py_DECREF(retval);
- PyErr_SetString(PyExc_RuntimeError,
- "generator ignored GeneratorExit");
+ if ((0)) {
+ #ifdef __Pyx_Coroutine_USED
+ } else if (__Pyx_Coroutine_Check(self)) {
+ msg = "coroutine ignored GeneratorExit";
+ #endif
+ #ifdef __Pyx_AsyncGen_USED
+ } else if (__Pyx_AsyncGen_CheckExact(self)) {
+#if PY_VERSION_HEX < 0x03060000
+ msg = "async generator ignored GeneratorExit - might require Python 3.6+ finalisation (PEP 525)";
+#else
+ msg = "async generator ignored GeneratorExit";
+#endif
+ #endif
+ } else {
+ msg = "generator ignored GeneratorExit";
+ }
+ PyErr_SetString(PyExc_RuntimeError, msg);
return NULL;
}
raised_exception = PyErr_Occurred();
- if (!raised_exception
- || raised_exception == PyExc_StopIteration
- || raised_exception == PyExc_GeneratorExit
- || PyErr_GivenExceptionMatches(raised_exception, PyExc_GeneratorExit)
- || PyErr_GivenExceptionMatches(raised_exception, PyExc_StopIteration))
- {
+ if (likely(!raised_exception || __Pyx_PyErr_GivenExceptionMatches2(raised_exception, PyExc_GeneratorExit, PyExc_StopIteration))) {
// ignore these errors
if (raised_exception) PyErr_Clear();
Py_INCREF(Py_None);
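The branch above picks a per-type message ("generator/coroutine/async generator ignored GeneratorExit") and turns it into a ``RuntimeError``, matching CPython's own rule. A minimal pure-Python illustration of the behaviour being enforced (`stubborn` is a hypothetical example generator):

```python
def stubborn():
    while True:
        try:
            yield 1
        except GeneratorExit:
            # Swallowing GeneratorExit and yielding again is illegal;
            # close() must then raise RuntimeError with the message above.
            continue

g = stubborn()
next(g)
try:
    g.close()
    msg = None
except RuntimeError as exc:
    msg = str(exc)
print(msg)  # generator ignored GeneratorExit
```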
@@ -766,42 +1004,43 @@
return NULL;
}
-static PyObject *__Pyx_Coroutine_Throw(PyObject *self, PyObject *args) {
+static PyObject *__Pyx__Coroutine_Throw(PyObject *self, PyObject *typ, PyObject *val, PyObject *tb,
+ PyObject *args, int close_on_genexit) {
__pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self;
- PyObject *typ;
- PyObject *tb = NULL;
- PyObject *val = NULL;
PyObject *yf = gen->yieldfrom;
- if (!PyArg_UnpackTuple(args, (char *)"throw", 1, 3, &typ, &val, &tb))
- return NULL;
-
- if (unlikely(__Pyx_Coroutine_CheckRunning(gen)))
- return NULL;
+ if (unlikely(gen->is_running))
+ return __Pyx_Coroutine_AlreadyRunningError(gen);
if (yf) {
PyObject *ret;
Py_INCREF(yf);
- if (PyErr_GivenExceptionMatches(typ, PyExc_GeneratorExit)) {
+ if (__Pyx_PyErr_GivenExceptionMatches(typ, PyExc_GeneratorExit) && close_on_genexit) {
+ // Asynchronous generators *should not* be closed right away.
+ // We have to allow some awaits to work it through, hence the
+ // `close_on_genexit` parameter here.
int err = __Pyx_Coroutine_CloseIter(gen, yf);
Py_DECREF(yf);
__Pyx_Coroutine_Undelegate(gen);
if (err < 0)
- return __Pyx_Coroutine_MethodReturn(__Pyx_Coroutine_SendEx(gen, NULL));
+ return __Pyx_Coroutine_MethodReturn(self, __Pyx_Coroutine_SendEx(gen, NULL, 0));
goto throw_here;
}
gen->is_running = 1;
+ if (0
#ifdef __Pyx_Generator_USED
- if (__Pyx_Generator_CheckExact(yf)) {
- ret = __Pyx_Coroutine_Throw(yf, args);
- } else
+ || __Pyx_Generator_CheckExact(yf)
#endif
#ifdef __Pyx_Coroutine_USED
- if (__Pyx_Coroutine_CheckExact(yf)) {
- ret = __Pyx_Coroutine_Throw(yf, args);
- } else
+ || __Pyx_Coroutine_Check(yf)
#endif
- {
+ ) {
+ ret = __Pyx__Coroutine_Throw(yf, typ, val, tb, args, close_on_genexit);
+ #ifdef __Pyx_Coroutine_USED
+ } else if (__Pyx_CoroutineAwait_CheckExact(yf)) {
+ ret = __Pyx__Coroutine_Throw(((__pyx_CoroutineAwaitObject*)yf)->coroutine, typ, val, tb, args, close_on_genexit);
+ #endif
+ } else {
PyObject *meth = __Pyx_PyObject_GetAttrStr(yf, PYIDENT("throw"));
if (unlikely(!meth)) {
Py_DECREF(yf);
@@ -814,7 +1053,12 @@
gen->is_running = 0;
goto throw_here;
}
- ret = PyObject_CallObject(meth, args);
+ if (likely(args)) {
+ ret = PyObject_CallObject(meth, args);
+ } else {
+ // "tb" or even "val" might be NULL, but that also correctly terminates the argument list
+ ret = PyObject_CallFunctionObjArgs(meth, typ, val, tb, NULL);
+ }
Py_DECREF(meth);
}
gen->is_running = 0;
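`__Pyx__Coroutine_Throw` first forwards the exception into the delegate (`yf`), and only raises in the outer frame if the inner one does not handle it. In plain-Python terms (hypothetical `inner`/`outer` generators):

```python
def inner():
    try:
        yield 1
    except ValueError:
        # The thrown exception arrives here, inside the delegate.
        yield "handled by inner"

def outer():
    yield from inner()
    yield "outer resumed"

g = outer()
next(g)                     # run up to inner's first yield
print(g.throw(ValueError))  # handled by inner
```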
@@ -822,23 +1066,36 @@
if (!ret) {
ret = __Pyx_Coroutine_FinishDelegation(gen);
}
- return __Pyx_Coroutine_MethodReturn(ret);
+ return __Pyx_Coroutine_MethodReturn(self, ret);
}
throw_here:
__Pyx_Raise(typ, val, tb, NULL);
- return __Pyx_Coroutine_MethodReturn(__Pyx_Coroutine_SendEx(gen, NULL));
+ return __Pyx_Coroutine_MethodReturn(self, __Pyx_Coroutine_SendEx(gen, NULL, 0));
}
-static int __Pyx_Coroutine_traverse(PyObject *self, visitproc visit, void *arg) {
- __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self;
+static PyObject *__Pyx_Coroutine_Throw(PyObject *self, PyObject *args) {
+ PyObject *typ;
+ PyObject *val = NULL;
+ PyObject *tb = NULL;
+
+ if (!PyArg_UnpackTuple(args, (char *)"throw", 1, 3, &typ, &val, &tb))
+ return NULL;
+
+ return __Pyx__Coroutine_Throw(self, typ, val, tb, args, 1);
+}
+static CYTHON_INLINE int __Pyx_Coroutine_traverse_excstate(__Pyx_ExcInfoStruct *exc_state, visitproc visit, void *arg) {
+ Py_VISIT(exc_state->exc_type);
+ Py_VISIT(exc_state->exc_value);
+ Py_VISIT(exc_state->exc_traceback);
+ return 0;
+}
+
+static int __Pyx_Coroutine_traverse(__pyx_CoroutineObject *gen, visitproc visit, void *arg) {
Py_VISIT(gen->closure);
Py_VISIT(gen->classobj);
Py_VISIT(gen->yieldfrom);
- Py_VISIT(gen->exc_type);
- Py_VISIT(gen->exc_value);
- Py_VISIT(gen->exc_traceback);
- return 0;
+ return __Pyx_Coroutine_traverse_excstate(&gen->gi_exc_state, visit, arg);
}
static int __Pyx_Coroutine_clear(PyObject *self) {
@@ -847,11 +1104,16 @@
Py_CLEAR(gen->closure);
Py_CLEAR(gen->classobj);
Py_CLEAR(gen->yieldfrom);
- Py_CLEAR(gen->exc_type);
- Py_CLEAR(gen->exc_value);
- Py_CLEAR(gen->exc_traceback);
+ __Pyx_Coroutine_ExceptionClear(&gen->gi_exc_state);
+#ifdef __Pyx_AsyncGen_USED
+ if (__Pyx_AsyncGen_CheckExact(self)) {
+ Py_CLEAR(((__pyx_PyAsyncGenObject*)gen)->ag_finalizer);
+ }
+#endif
+ Py_CLEAR(gen->gi_code);
Py_CLEAR(gen->gi_name);
Py_CLEAR(gen->gi_qualname);
+ Py_CLEAR(gen->gi_modulename);
return 0;
}
@@ -862,10 +1124,10 @@
if (gen->gi_weakreflist != NULL)
PyObject_ClearWeakRefs(self);
- if (gen->resume_label > 0) {
- // Generator is paused, so we need to close
+ if (gen->resume_label >= 0) {
+ // Generator is paused or unstarted, so we need to close
PyObject_GC_Track(self);
-#if PY_VERSION_HEX >= 0x030400a1
+#if PY_VERSION_HEX >= 0x030400a1 && CYTHON_USE_TP_FINALIZE
if (PyObject_CallFinalizerFromDealloc(self))
#else
Py_TYPE(gen)->tp_del(self);
@@ -878,40 +1140,110 @@
PyObject_GC_UnTrack(self);
}
+#ifdef __Pyx_AsyncGen_USED
+ if (__Pyx_AsyncGen_CheckExact(self)) {
+ /* We have to handle this case for asynchronous generators
+ right here, because this code has to be between UNTRACK
+ and GC_Del. */
+ Py_CLEAR(((__pyx_PyAsyncGenObject*)self)->ag_finalizer);
+ }
+#endif
__Pyx_Coroutine_clear(self);
PyObject_GC_Del(gen);
}
static void __Pyx_Coroutine_del(PyObject *self) {
- PyObject *res;
PyObject *error_type, *error_value, *error_traceback;
__pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self;
__Pyx_PyThreadState_declare
- if (gen->resume_label <= 0)
- return ;
+ if (gen->resume_label < 0) {
+ // already terminated => nothing to clean up
+ return;
+ }
-#if PY_VERSION_HEX < 0x030400a1
+#if !CYTHON_USE_TP_FINALIZE
// Temporarily resurrect the object.
assert(self->ob_refcnt == 0);
self->ob_refcnt = 1;
#endif
- // Save the current exception, if any.
__Pyx_PyThreadState_assign
+
+ // Save the current exception, if any.
__Pyx_ErrFetch(&error_type, &error_value, &error_traceback);
- res = __Pyx_Coroutine_Close(self);
+#ifdef __Pyx_AsyncGen_USED
+ if (__Pyx_AsyncGen_CheckExact(self)) {
+ __pyx_PyAsyncGenObject *agen = (__pyx_PyAsyncGenObject*)self;
+ PyObject *finalizer = agen->ag_finalizer;
+ if (finalizer && !agen->ag_closed) {
+ PyObject *res = __Pyx_PyObject_CallOneArg(finalizer, self);
+ if (unlikely(!res)) {
+ PyErr_WriteUnraisable(self);
+ } else {
+ Py_DECREF(res);
+ }
+ // Restore the saved exception.
+ __Pyx_ErrRestore(error_type, error_value, error_traceback);
+ return;
+ }
+ }
+#endif
- if (res == NULL)
- PyErr_WriteUnraisable(self);
- else
- Py_DECREF(res);
+ if (unlikely(gen->resume_label == 0 && !error_value)) {
+#ifdef __Pyx_Coroutine_USED
+#ifdef __Pyx_Generator_USED
+ // only warn about (async) coroutines
+ if (!__Pyx_Generator_CheckExact(self))
+#endif
+ {
+ // untrack dead object as we are executing Python code (which might trigger GC)
+ PyObject_GC_UnTrack(self);
+#if PY_MAJOR_VERSION >= 3 /* PY_VERSION_HEX >= 0x03030000*/ || defined(PyErr_WarnFormat)
+ if (unlikely(PyErr_WarnFormat(PyExc_RuntimeWarning, 1, "coroutine '%.50S' was never awaited", gen->gi_qualname) < 0))
+ PyErr_WriteUnraisable(self);
+#else
+ {PyObject *msg;
+ char *cmsg;
+ #if CYTHON_COMPILING_IN_PYPY
+ msg = NULL;
+ cmsg = (char*) "coroutine was never awaited";
+ #else
+ char *cname;
+ PyObject *qualname;
+ qualname = gen->gi_qualname;
+ cname = PyString_AS_STRING(qualname);
+ msg = PyString_FromFormat("coroutine '%.50s' was never awaited", cname);
+
+ if (unlikely(!msg)) {
+ PyErr_Clear();
+ cmsg = (char*) "coroutine was never awaited";
+ } else {
+ cmsg = PyString_AS_STRING(msg);
+ }
+ #endif
+ if (unlikely(PyErr_WarnEx(PyExc_RuntimeWarning, cmsg, 1) < 0))
+ PyErr_WriteUnraisable(self);
+ Py_XDECREF(msg);}
+#endif
+ PyObject_GC_Track(self);
+ }
+#endif /*__Pyx_Coroutine_USED*/
+ } else {
+ PyObject *res = __Pyx_Coroutine_Close(self);
+ if (unlikely(!res)) {
+ if (PyErr_Occurred())
+ PyErr_WriteUnraisable(self);
+ } else {
+ Py_DECREF(res);
+ }
+ }
// Restore the saved exception.
__Pyx_ErrRestore(error_type, error_value, error_traceback);
-#if PY_VERSION_HEX < 0x030400a1
+#if !CYTHON_USE_TP_FINALIZE
// Undo the temporary resurrection; can't use DECREF here, it would
// cause a recursive call.
assert(self->ob_refcnt > 0);
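The reworked `__Pyx_Coroutine_del` above emits the same diagnostic CPython produces when a coroutine object dies without ever being started. A sketch that provokes the warning in plain Python (assumes a refcounting CPython so finalisation happens promptly; `demo` is a made-up coroutine):

```python
import gc
import warnings

async def demo():
    return 42

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    demo()        # coroutine object created and dropped, never awaited
    gc.collect()  # be safe on non-refcounting implementations

messages = [str(w.message) for w in caught]
print(messages)
```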
@@ -948,7 +1280,7 @@
}
static PyObject *
-__Pyx_Coroutine_get_name(__pyx_CoroutineObject *self)
+__Pyx_Coroutine_get_name(__pyx_CoroutineObject *self, CYTHON_UNUSED void *context)
{
PyObject *name = self->gi_name;
// avoid NULL pointer dereference during garbage collection
@@ -958,15 +1290,16 @@
}
static int
-__Pyx_Coroutine_set_name(__pyx_CoroutineObject *self, PyObject *value)
+__Pyx_Coroutine_set_name(__pyx_CoroutineObject *self, PyObject *value, CYTHON_UNUSED void *context)
{
PyObject *tmp;
#if PY_MAJOR_VERSION >= 3
- if (unlikely(value == NULL || !PyUnicode_Check(value))) {
+ if (unlikely(value == NULL || !PyUnicode_Check(value)))
#else
- if (unlikely(value == NULL || !PyString_Check(value))) {
+ if (unlikely(value == NULL || !PyString_Check(value)))
#endif
+ {
PyErr_SetString(PyExc_TypeError,
"__name__ must be set to a string object");
return -1;
@@ -979,7 +1312,7 @@
}
static PyObject *
-__Pyx_Coroutine_get_qualname(__pyx_CoroutineObject *self)
+__Pyx_Coroutine_get_qualname(__pyx_CoroutineObject *self, CYTHON_UNUSED void *context)
{
PyObject *name = self->gi_qualname;
// avoid NULL pointer dereference during garbage collection
@@ -989,15 +1322,16 @@
}
static int
-__Pyx_Coroutine_set_qualname(__pyx_CoroutineObject *self, PyObject *value)
+__Pyx_Coroutine_set_qualname(__pyx_CoroutineObject *self, PyObject *value, CYTHON_UNUSED void *context)
{
PyObject *tmp;
#if PY_MAJOR_VERSION >= 3
- if (unlikely(value == NULL || !PyUnicode_Check(value))) {
+ if (unlikely(value == NULL || !PyUnicode_Check(value)))
#else
- if (unlikely(value == NULL || !PyString_Check(value))) {
+ if (unlikely(value == NULL || !PyString_Check(value)))
#endif
+ {
PyErr_SetString(PyExc_TypeError,
"__qualname__ must be set to a string object");
return -1;
@@ -1010,13 +1344,17 @@
}
static __pyx_CoroutineObject *__Pyx__Coroutine_New(
- PyTypeObject* type, __pyx_coroutine_body_t body, PyObject *closure,
+ PyTypeObject* type, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure,
PyObject *name, PyObject *qualname, PyObject *module_name) {
__pyx_CoroutineObject *gen = PyObject_GC_New(__pyx_CoroutineObject, type);
-
- if (gen == NULL)
+ if (unlikely(!gen))
return NULL;
+ return __Pyx__Coroutine_NewInit(gen, body, code, closure, name, qualname, module_name);
+}
+static __pyx_CoroutineObject *__Pyx__Coroutine_NewInit(
+ __pyx_CoroutineObject *gen, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure,
+ PyObject *name, PyObject *qualname, PyObject *module_name) {
gen->body = body;
gen->closure = closure;
Py_XINCREF(closure);
@@ -1024,9 +1362,12 @@
gen->resume_label = 0;
gen->classobj = NULL;
gen->yieldfrom = NULL;
- gen->exc_type = NULL;
- gen->exc_value = NULL;
- gen->exc_traceback = NULL;
+ gen->gi_exc_state.exc_type = NULL;
+ gen->gi_exc_state.exc_value = NULL;
+ gen->gi_exc_state.exc_traceback = NULL;
+#if CYTHON_USE_EXC_INFO_STACK
+ gen->gi_exc_state.previous_item = NULL;
+#endif
gen->gi_weakreflist = NULL;
Py_XINCREF(qualname);
gen->gi_qualname = qualname;
@@ -1034,6 +1375,8 @@
gen->gi_name = name;
Py_XINCREF(module_name);
gen->gi_modulename = module_name;
+ Py_XINCREF(code);
+ gen->gi_code = code;
PyObject_GC_Track(gen);
return gen;
@@ -1043,11 +1386,7 @@
//////////////////// Coroutine ////////////////////
//@requires: CoroutineBase
//@requires: PatchGeneratorABC
-
-typedef struct {
- PyObject_HEAD
- PyObject *coroutine;
-} __pyx_CoroutineAwaitObject;
+//@requires: ObjectHandling.c::PyObject_GenericGetAttrNoDict
static void __Pyx_CoroutineAwait_dealloc(PyObject *self) {
PyObject_GC_UnTrack(self);
@@ -1077,7 +1416,7 @@
return __Pyx_Coroutine_Throw(self->coroutine, args);
}
-static PyObject *__Pyx_CoroutineAwait_Close(__pyx_CoroutineAwaitObject *self) {
+static PyObject *__Pyx_CoroutineAwait_Close(__pyx_CoroutineAwaitObject *self, CYTHON_UNUSED PyObject *arg) {
return __Pyx_Coroutine_Close(self->coroutine);
}
@@ -1158,8 +1497,15 @@
#if PY_VERSION_HEX >= 0x030400a1
0, /*tp_finalize*/
#endif
+#if PY_VERSION_HEX >= 0x030800b1
+ 0, /*tp_vectorcall*/
+#endif
+#if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000
+ 0, /*tp_print*/
+#endif
};
+#if PY_VERSION_HEX < 0x030500B1 || defined(__Pyx_IterableCoroutine_USED) || CYTHON_USE_ASYNC_SLOTS
static CYTHON_INLINE PyObject *__Pyx__Coroutine_await(PyObject *coroutine) {
__pyx_CoroutineAwaitObject *await = PyObject_GC_New(__pyx_CoroutineAwaitObject, __pyx_CoroutineAwaitType);
if (unlikely(!await)) return NULL;
@@ -1168,72 +1514,29 @@
PyObject_GC_Track(await);
return (PyObject*)await;
}
+#endif
+#if PY_VERSION_HEX < 0x030500B1
+static PyObject *__Pyx_Coroutine_await_method(PyObject *coroutine, CYTHON_UNUSED PyObject *arg) {
+ return __Pyx__Coroutine_await(coroutine);
+}
+#endif
+
+#if defined(__Pyx_IterableCoroutine_USED) || CYTHON_USE_ASYNC_SLOTS
static PyObject *__Pyx_Coroutine_await(PyObject *coroutine) {
- if (unlikely(!coroutine || !__Pyx_Coroutine_CheckExact(coroutine))) {
+ if (unlikely(!coroutine || !__Pyx_Coroutine_Check(coroutine))) {
PyErr_SetString(PyExc_TypeError, "invalid input, expected coroutine");
return NULL;
}
return __Pyx__Coroutine_await(coroutine);
}
-
-static void __Pyx_Coroutine_check_and_dealloc(PyObject *self) {
- __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self;
-
- if (gen->resume_label == 0 && !PyErr_Occurred()) {
- // untrack dead object as we are executing Python code (which might trigger GC)
- PyObject_GC_UnTrack(self);
-#if PY_VERSION_HEX >= 0x03030000 || defined(PyErr_WarnFormat)
- PyErr_WarnFormat(PyExc_RuntimeWarning, 1, "coroutine '%.50S' was never awaited", gen->gi_qualname);
- PyErr_Clear(); /* just in case, must not keep a live exception during GC */
-#else
- {PyObject *msg;
- char *cmsg;
- #if CYTHON_COMPILING_IN_PYPY
- msg = NULL;
- cmsg = (char*) "coroutine was never awaited";
- #else
- char *cname;
- PyObject *qualname;
- #if PY_MAJOR_VERSION >= 3
- qualname = PyUnicode_AsUTF8String(gen->gi_qualname);
- if (likely(qualname)) {
- cname = PyBytes_AS_STRING(qualname);
- } else {
- PyErr_Clear();
- cname = (char*) "?";
- }
- msg = PyBytes_FromFormat(
- #else
- qualname = gen->gi_qualname;
- cname = PyString_AS_STRING(qualname);
- msg = PyString_FromFormat(
- #endif
- "coroutine '%.50s' was never awaited", cname);
-
- #if PY_MAJOR_VERSION >= 3
- Py_XDECREF(qualname);
- #endif
-
- if (unlikely(!msg)) {
- PyErr_Clear();
- cmsg = (char*) "coroutine was never awaited";
- } else {
- #if PY_MAJOR_VERSION >= 3
- cmsg = PyBytes_AS_STRING(msg);
- #else
- cmsg = PyString_AS_STRING(msg);
- #endif
- }
- #endif
- if (unlikely(PyErr_WarnEx(PyExc_RuntimeWarning, cmsg, 1) < 0))
- PyErr_WriteUnraisable(self);
- Py_XDECREF(msg);}
#endif
- PyObject_GC_Track(self);
- }
- __Pyx_Coroutine_dealloc(self);
+static PyObject *
+__Pyx_Coroutine_get_frame(CYTHON_UNUSED __pyx_CoroutineObject *self, CYTHON_UNUSED void *context)
+{
+ // Fake implementation that always returns None, but at least does not raise an AttributeError.
+ Py_RETURN_NONE;
}
#if CYTHON_COMPILING_IN_CPYTHON && PY_MAJOR_VERSION >= 3 && PY_VERSION_HEX < 0x030500B1
@@ -1255,10 +1558,10 @@
(char*) PyDoc_STR("send(arg) -> send 'arg' into coroutine,\nreturn next iterated value or raise StopIteration.")},
{"throw", (PyCFunction) __Pyx_Coroutine_Throw, METH_VARARGS,
(char*) PyDoc_STR("throw(typ[,val[,tb]]) -> raise exception in coroutine,\nreturn next iterated value or raise StopIteration.")},
- {"close", (PyCFunction) __Pyx_Coroutine_Close, METH_NOARGS,
+ {"close", (PyCFunction) __Pyx_Coroutine_Close_Method, METH_NOARGS,
(char*) PyDoc_STR("close() -> raise GeneratorExit inside coroutine.")},
#if PY_VERSION_HEX < 0x030500B1
- {"__await__", (PyCFunction) __Pyx_Coroutine_await, METH_NOARGS,
+ {"__await__", (PyCFunction) __Pyx_Coroutine_await_method, METH_NOARGS,
(char*) PyDoc_STR("__await__() -> return an iterator to be used in await expression.")},
#endif
{0, 0, 0, 0}
@@ -1268,6 +1571,7 @@
{(char *) "cr_running", T_BOOL, offsetof(__pyx_CoroutineObject, is_running), READONLY, NULL},
{(char*) "cr_await", T_OBJECT, offsetof(__pyx_CoroutineObject, yieldfrom), READONLY,
(char*) PyDoc_STR("object being awaited, or None")},
+ {(char*) "cr_code", T_OBJECT, offsetof(__pyx_CoroutineObject, gi_code), READONLY, NULL},
{(char *) "__module__", T_OBJECT, offsetof(__pyx_CoroutineObject, gi_modulename), PY_WRITE_RESTRICTED, 0},
{0, 0, 0, 0, 0}
};
@@ -1277,6 +1581,8 @@
(char*) PyDoc_STR("name of the coroutine"), 0},
{(char *) "__qualname__", (getter)__Pyx_Coroutine_get_qualname, (setter)__Pyx_Coroutine_set_qualname,
(char*) PyDoc_STR("qualified name of the coroutine"), 0},
+ {(char *) "cr_frame", (getter)__Pyx_Coroutine_get_frame, NULL,
+ (char*) PyDoc_STR("Frame of the coroutine"), 0},
{0, 0, 0, 0, 0}
};
@@ -1293,7 +1599,7 @@
"coroutine", /*tp_name*/
sizeof(__pyx_CoroutineObject), /*tp_basicsize*/
0, /*tp_itemsize*/
- (destructor) __Pyx_Coroutine_check_and_dealloc,/*tp_dealloc*/
+ (destructor) __Pyx_Coroutine_dealloc,/*tp_dealloc*/
0, /*tp_print*/
0, /*tp_getattr*/
0, /*tp_setattr*/
@@ -1316,14 +1622,14 @@
0, /*tp_doc*/
(traverseproc) __Pyx_Coroutine_traverse, /*tp_traverse*/
0, /*tp_clear*/
-#if CYTHON_COMPILING_IN_CPYTHON && PY_MAJOR_VERSION >= 3 && PY_VERSION_HEX < 0x030500B1
+#if CYTHON_USE_ASYNC_SLOTS && CYTHON_COMPILING_IN_CPYTHON && PY_MAJOR_VERSION >= 3 && PY_VERSION_HEX < 0x030500B1
// in order to (mis-)use tp_reserved above, we must also implement tp_richcompare
__Pyx_Coroutine_compare, /*tp_richcompare*/
#else
0, /*tp_richcompare*/
#endif
offsetof(__pyx_CoroutineObject, gi_weakreflist), /*tp_weaklistoffset*/
-// no tp_iter() as iterator is only available through __await__()
+ // no tp_iter() as iterator is only available through __await__()
0, /*tp_iter*/
0, /*tp_iternext*/
__pyx_Coroutine_methods, /*tp_methods*/
@@ -1344,41 +1650,157 @@
0, /*tp_cache*/
0, /*tp_subclasses*/
0, /*tp_weaklist*/
-#if PY_VERSION_HEX >= 0x030400a1
+#if CYTHON_USE_TP_FINALIZE
0, /*tp_del*/
#else
__Pyx_Coroutine_del, /*tp_del*/
#endif
0, /*tp_version_tag*/
-#if PY_VERSION_HEX >= 0x030400a1
+#if CYTHON_USE_TP_FINALIZE
__Pyx_Coroutine_del, /*tp_finalize*/
+#elif PY_VERSION_HEX >= 0x030400a1
+ 0, /*tp_finalize*/
+#endif
+#if PY_VERSION_HEX >= 0x030800b1
+ 0, /*tp_vectorcall*/
+#endif
+#if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000
+ 0, /*tp_print*/
#endif
};
static int __pyx_Coroutine_init(void) {
// on Windows, C-API functions can't be used in slots statically
- __pyx_CoroutineType_type.tp_getattro = PyObject_GenericGetAttr;
-
+ __pyx_CoroutineType_type.tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict;
__pyx_CoroutineType = __Pyx_FetchCommonType(&__pyx_CoroutineType_type);
if (unlikely(!__pyx_CoroutineType))
return -1;
+#ifdef __Pyx_IterableCoroutine_USED
+ if (unlikely(__pyx_IterableCoroutine_init() == -1))
+ return -1;
+#endif
+
__pyx_CoroutineAwaitType = __Pyx_FetchCommonType(&__pyx_CoroutineAwaitType_type);
if (unlikely(!__pyx_CoroutineAwaitType))
return -1;
return 0;
}
+
+//////////////////// IterableCoroutine.proto ////////////////////
+
+#define __Pyx_IterableCoroutine_USED
+
+static PyTypeObject *__pyx_IterableCoroutineType = 0;
+
+#undef __Pyx_Coroutine_Check
+#define __Pyx_Coroutine_Check(obj) (__Pyx_Coroutine_CheckExact(obj) || (Py_TYPE(obj) == __pyx_IterableCoroutineType))
+
+#define __Pyx_IterableCoroutine_New(body, code, closure, name, qualname, module_name) \
+ __Pyx__Coroutine_New(__pyx_IterableCoroutineType, body, code, closure, name, qualname, module_name)
+
+static int __pyx_IterableCoroutine_init(void);/*proto*/
+
+
+//////////////////// IterableCoroutine ////////////////////
+//@requires: Coroutine
+//@requires: CommonStructures.c::FetchCommonType
+
+static PyTypeObject __pyx_IterableCoroutineType_type = {
+ PyVarObject_HEAD_INIT(0, 0)
+ "iterable_coroutine", /*tp_name*/
+ sizeof(__pyx_CoroutineObject), /*tp_basicsize*/
+ 0, /*tp_itemsize*/
+ (destructor) __Pyx_Coroutine_dealloc,/*tp_dealloc*/
+ 0, /*tp_print*/
+ 0, /*tp_getattr*/
+ 0, /*tp_setattr*/
+#if CYTHON_USE_ASYNC_SLOTS
+ &__pyx_Coroutine_as_async, /*tp_as_async (tp_reserved) - Py3 only! */
+#else
+ 0, /*tp_reserved*/
+#endif
+ 0, /*tp_repr*/
+ 0, /*tp_as_number*/
+ 0, /*tp_as_sequence*/
+ 0, /*tp_as_mapping*/
+ 0, /*tp_hash*/
+ 0, /*tp_call*/
+ 0, /*tp_str*/
+ 0, /*tp_getattro*/
+ 0, /*tp_setattro*/
+ 0, /*tp_as_buffer*/
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_HAVE_FINALIZE, /*tp_flags*/
+ 0, /*tp_doc*/
+ (traverseproc) __Pyx_Coroutine_traverse, /*tp_traverse*/
+ 0, /*tp_clear*/
+#if CYTHON_USE_ASYNC_SLOTS && CYTHON_COMPILING_IN_CPYTHON && PY_MAJOR_VERSION >= 3 && PY_VERSION_HEX < 0x030500B1
+ // in order to (mis-)use tp_reserved above, we must also implement tp_richcompare
+ __Pyx_Coroutine_compare, /*tp_richcompare*/
+#else
+ 0, /*tp_richcompare*/
+#endif
+ offsetof(__pyx_CoroutineObject, gi_weakreflist), /*tp_weaklistoffset*/
+ // enable iteration for legacy support of asyncio yield-from protocol
+ __Pyx_Coroutine_await, /*tp_iter*/
+ (iternextfunc) __Pyx_Generator_Next, /*tp_iternext*/
+ __pyx_Coroutine_methods, /*tp_methods*/
+ __pyx_Coroutine_memberlist, /*tp_members*/
+ __pyx_Coroutine_getsets, /*tp_getset*/
+ 0, /*tp_base*/
+ 0, /*tp_dict*/
+ 0, /*tp_descr_get*/
+ 0, /*tp_descr_set*/
+ 0, /*tp_dictoffset*/
+ 0, /*tp_init*/
+ 0, /*tp_alloc*/
+ 0, /*tp_new*/
+ 0, /*tp_free*/
+ 0, /*tp_is_gc*/
+ 0, /*tp_bases*/
+ 0, /*tp_mro*/
+ 0, /*tp_cache*/
+ 0, /*tp_subclasses*/
+ 0, /*tp_weaklist*/
+#if PY_VERSION_HEX >= 0x030400a1
+ 0, /*tp_del*/
+#else
+ __Pyx_Coroutine_del, /*tp_del*/
+#endif
+ 0, /*tp_version_tag*/
+#if PY_VERSION_HEX >= 0x030400a1
+ __Pyx_Coroutine_del, /*tp_finalize*/
+#endif
+#if PY_VERSION_HEX >= 0x030800b1
+ 0, /*tp_vectorcall*/
+#endif
+#if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000
+ 0, /*tp_print*/
+#endif
+};
+
+
+static int __pyx_IterableCoroutine_init(void) {
+ __pyx_IterableCoroutineType_type.tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict;
+ __pyx_IterableCoroutineType = __Pyx_FetchCommonType(&__pyx_IterableCoroutineType_type);
+ if (unlikely(!__pyx_IterableCoroutineType))
+ return -1;
+ return 0;
+}
+
+
//////////////////// Generator ////////////////////
//@requires: CoroutineBase
//@requires: PatchGeneratorABC
+//@requires: ObjectHandling.c::PyObject_GenericGetAttrNoDict
static PyMethodDef __pyx_Generator_methods[] = {
{"send", (PyCFunction) __Pyx_Coroutine_Send, METH_O,
(char*) PyDoc_STR("send(arg) -> send 'arg' into generator,\nreturn next yielded value or raise StopIteration.")},
{"throw", (PyCFunction) __Pyx_Coroutine_Throw, METH_VARARGS,
(char*) PyDoc_STR("throw(typ[,val[,tb]]) -> raise exception in generator,\nreturn next yielded value or raise StopIteration.")},
- {"close", (PyCFunction) __Pyx_Coroutine_Close, METH_NOARGS,
+ {"close", (PyCFunction) __Pyx_Coroutine_Close_Method, METH_NOARGS,
(char*) PyDoc_STR("close() -> raise GeneratorExit inside generator.")},
{0, 0, 0, 0}
};
@@ -1387,6 +1809,7 @@
{(char *) "gi_running", T_BOOL, offsetof(__pyx_CoroutineObject, is_running), READONLY, NULL},
{(char*) "gi_yieldfrom", T_OBJECT, offsetof(__pyx_CoroutineObject, yieldfrom), READONLY,
(char*) PyDoc_STR("object being iterated by 'yield from', or None")},
+ {(char*) "gi_code", T_OBJECT, offsetof(__pyx_CoroutineObject, gi_code), READONLY, NULL},
{0, 0, 0, 0, 0}
};
@@ -1444,20 +1867,28 @@
0, /*tp_cache*/
0, /*tp_subclasses*/
0, /*tp_weaklist*/
-#if PY_VERSION_HEX >= 0x030400a1
+#if CYTHON_USE_TP_FINALIZE
0, /*tp_del*/
#else
__Pyx_Coroutine_del, /*tp_del*/
#endif
0, /*tp_version_tag*/
-#if PY_VERSION_HEX >= 0x030400a1
+#if CYTHON_USE_TP_FINALIZE
__Pyx_Coroutine_del, /*tp_finalize*/
+#elif PY_VERSION_HEX >= 0x030400a1
+ 0, /*tp_finalize*/
+#endif
+#if PY_VERSION_HEX >= 0x030800b1
+ 0, /*tp_vectorcall*/
+#endif
+#if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000
+ 0, /*tp_print*/
#endif
};
static int __pyx_Generator_init(void) {
// on Windows, C-API functions can't be used in slots statically
- __pyx_GeneratorType_type.tp_getattro = PyObject_GenericGetAttr;
+ __pyx_GeneratorType_type.tp_getattro = __Pyx_PyObject_GenericGetAttrNoDict;
__pyx_GeneratorType_type.tp_iter = PyObject_SelfIter;
__pyx_GeneratorType = __Pyx_FetchCommonType(&__pyx_GeneratorType_type);
@@ -1482,13 +1913,15 @@
// 1) Instantiating an exception just to pass back a value is costly.
// 2) CPython 3.3 <= x < 3.5b1 crash in yield-from when the StopIteration is not instantiated.
// 3) Passing a tuple as value into PyErr_SetObject() passes its items on as arguments.
-// 4) If there is currently an exception being handled, we need to chain it.
+// 4) Passing an exception as value will interpret it as an exception on unpacking and raise it (or unpack its value).
+// 5) If there is currently an exception being handled, we need to chain it.
static void __Pyx__ReturnWithStopIteration(PyObject* value) {
PyObject *exc, *args;
#if CYTHON_COMPILING_IN_CPYTHON || CYTHON_COMPILING_IN_PYSTON
__Pyx_PyThreadState_declare
- if ((PY_VERSION_HEX >= 0x03030000 && PY_VERSION_HEX < 0x030500B1) || unlikely(PyTuple_Check(value))) {
+ if ((PY_VERSION_HEX >= 0x03030000 && PY_VERSION_HEX < 0x030500B1)
+ || unlikely(PyTuple_Check(value) || PyExceptionInstance_Check(value))) {
args = PyTuple_New(1);
if (unlikely(!args)) return;
Py_INCREF(value);
@@ -1501,13 +1934,20 @@
Py_INCREF(value);
exc = value;
}
+ #if CYTHON_FAST_THREAD_STATE
__Pyx_PyThreadState_assign
- if (!$local_tstate_cname->exc_type) {
+ #if CYTHON_USE_EXC_INFO_STACK
+ if (!$local_tstate_cname->exc_info->exc_type)
+ #else
+ if (!$local_tstate_cname->exc_type)
+ #endif
+ {
// no chaining needed => avoid the overhead in PyErr_SetObject()
Py_INCREF(PyExc_StopIteration);
__Pyx_ErrRestore(PyExc_StopIteration, exc, NULL);
return;
}
+ #endif
#else
args = PyTuple_Pack(1, value);
if (unlikely(!args)) return;
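The numbered comments above list why the return value cannot always be handed straight to `PyErr_SetObject`: a tuple would be unpacked into constructor arguments, and an exception instance would be re-raised (or have its own value unpacked). The Python-level protocol that must survive this packing:

```python
def inner():
    yield 1
    return (2, 3)      # a tuple return value must arrive intact

def outer():
    result = yield from inner()
    yield result

g = outer()
print(next(g))         # 1
print(next(g))         # (2, 3) -- the whole tuple, not unpacked

# Raw protocol: the return value travels on StopIteration.value.
g2 = inner()
next(g2)
try:
    next(g2)
except StopIteration as exc:
    print(exc.value)   # (2, 3)
```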
@@ -1606,11 +2046,11 @@
static int abc_patched = 0;
if (CYTHON_REGISTER_ABCS && !abc_patched) {
PyObject *module;
- module = PyImport_ImportModule((PY_VERSION_HEX >= 0x03030000) ? "collections.abc" : "collections");
+ module = PyImport_ImportModule((PY_MAJOR_VERSION >= 3) ? "collections.abc" : "collections");
if (!module) {
PyErr_WriteUnraisable(NULL);
if (unlikely(PyErr_WarnEx(PyExc_RuntimeWarning,
- ((PY_VERSION_HEX >= 0x03030000) ?
+ ((PY_MAJOR_VERSION >= 3) ?
"Cython module failed to register with collections.abc module" :
"Cython module failed to register with collections module"), 1) < 0)) {
return -1;
@@ -1675,7 +2115,8 @@
);
} else {
PyErr_Clear();
-#if PY_VERSION_HEX < 0x03040200
+// Always enable fallback: even if we compile against 3.4.2, we might be running on 3.4.1 at some point.
+//#if PY_VERSION_HEX < 0x03040200
// Py3.4.1 used to have asyncio.tasks instead of asyncio.coroutines
package = __Pyx_Import(PYIDENT("asyncio.tasks"), NULL, 0);
if (unlikely(!package)) goto asyncio_done;
@@ -1696,15 +2137,15 @@
old_types.add(_cython_generator_type)
""")
);
-#endif
+//#endif
// Py < 0x03040200
}
Py_DECREF(package);
if (unlikely(!patch_module)) goto ignore;
-#if PY_VERSION_HEX < 0x03040200
+//#if PY_VERSION_HEX < 0x03040200
asyncio_done:
PyErr_Clear();
-#endif
+//#endif
asyncio_patched = 1;
#ifdef __Pyx_Generator_USED
// now patch inspect.isgenerator() by looking up the imported module in the patched asyncio module
diff -Nru cython-0.26.1/Cython/Utility/CppConvert.pyx cython-0.29.14/Cython/Utility/CppConvert.pyx
--- cython-0.26.1/Cython/Utility/CppConvert.pyx 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Utility/CppConvert.pyx 2019-11-01 14:13:39.000000000 +0000
@@ -11,7 +11,7 @@
@cname("{{cname}}")
cdef string {{cname}}(object o) except *:
- cdef Py_ssize_t length
+ cdef Py_ssize_t length = 0
cdef const char* data = __Pyx_PyObject_AsStringAndSize(o, &length)
return string(data, length)
diff -Nru cython-0.26.1/Cython/Utility/CppSupport.cpp cython-0.29.14/Cython/Utility/CppSupport.cpp
--- cython-0.26.1/Cython/Utility/CppSupport.cpp 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Utility/CppSupport.cpp 2018-09-22 14:18:56.000000000 +0000
@@ -50,7 +50,7 @@
/////////////// PythranConversion.proto ///////////////
template <typename T>
-auto to_python_from_expr(T &&value) -> decltype(to_python(
+auto __Pyx_pythran_to_python(T &&value) -> decltype(to_python(
typename pythonic::returnable<typename std::remove_cv<typename std::remove_reference<T>::type>::type>::type{std::forward<T>(value)}))
{
using returnable_type = typename pythonic::returnable<typename std::remove_cv<typename std::remove_reference<T>::type>::type>::type;
diff -Nru cython-0.26.1/Cython/Utility/CythonFunction.c cython-0.29.14/Cython/Utility/CythonFunction.c
--- cython-0.26.1/Cython/Utility/CythonFunction.c 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Utility/CythonFunction.c 2019-11-01 14:13:39.000000000 +0000
@@ -2,7 +2,6 @@
//////////////////// CythonFunction.proto ////////////////////
#define __Pyx_CyFunction_USED 1
-#include <structmember.h>
#define __Pyx_CYFUNCTION_STATICMETHOD 0x01
#define __Pyx_CYFUNCTION_CLASSMETHOD 0x02
@@ -48,6 +47,8 @@
static PyTypeObject *__pyx_CyFunctionType = 0;
+#define __Pyx_CyFunction_Check(obj) (__Pyx_TypeCheck(obj, __pyx_CyFunctionType))
+
#define __Pyx_CyFunction_NewEx(ml, flags, qualname, self, module, globals, code) \
__Pyx_CyFunction_New(__pyx_CyFunctionType, ml, flags, qualname, self, module, globals, code)
@@ -75,6 +76,8 @@
//@requires: CommonStructures.c::FetchCommonType
////@requires: ObjectHandling.c::PyObjectGetAttrStr
+#include <structmember.h>
+
static PyObject *
__Pyx_CyFunction_get_doc(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *closure)
{
@@ -97,7 +100,7 @@
}
static int
-__Pyx_CyFunction_set_doc(__pyx_CyFunctionObject *op, PyObject *value)
+__Pyx_CyFunction_set_doc(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context)
{
PyObject *tmp = op->func_doc;
if (value == NULL) {
@@ -111,7 +114,7 @@
}
static PyObject *
-__Pyx_CyFunction_get_name(__pyx_CyFunctionObject *op)
+__Pyx_CyFunction_get_name(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)
{
if (unlikely(op->func_name == NULL)) {
#if PY_MAJOR_VERSION >= 3
@@ -127,15 +130,16 @@
}
static int
-__Pyx_CyFunction_set_name(__pyx_CyFunctionObject *op, PyObject *value)
+__Pyx_CyFunction_set_name(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context)
{
PyObject *tmp;
#if PY_MAJOR_VERSION >= 3
- if (unlikely(value == NULL || !PyUnicode_Check(value))) {
+ if (unlikely(value == NULL || !PyUnicode_Check(value)))
#else
- if (unlikely(value == NULL || !PyString_Check(value))) {
+ if (unlikely(value == NULL || !PyString_Check(value)))
#endif
+ {
PyErr_SetString(PyExc_TypeError,
"__name__ must be set to a string object");
return -1;
@@ -148,22 +152,23 @@
}
static PyObject *
-__Pyx_CyFunction_get_qualname(__pyx_CyFunctionObject *op)
+__Pyx_CyFunction_get_qualname(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)
{
Py_INCREF(op->func_qualname);
return op->func_qualname;
}
static int
-__Pyx_CyFunction_set_qualname(__pyx_CyFunctionObject *op, PyObject *value)
+__Pyx_CyFunction_set_qualname(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context)
{
PyObject *tmp;
#if PY_MAJOR_VERSION >= 3
- if (unlikely(value == NULL || !PyUnicode_Check(value))) {
+ if (unlikely(value == NULL || !PyUnicode_Check(value)))
#else
- if (unlikely(value == NULL || !PyString_Check(value))) {
+ if (unlikely(value == NULL || !PyString_Check(value)))
#endif
+ {
PyErr_SetString(PyExc_TypeError,
"__qualname__ must be set to a string object");
return -1;
@@ -188,7 +193,7 @@
}
static PyObject *
-__Pyx_CyFunction_get_dict(__pyx_CyFunctionObject *op)
+__Pyx_CyFunction_get_dict(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)
{
if (unlikely(op->func_dict == NULL)) {
op->func_dict = PyDict_New();
@@ -200,7 +205,7 @@
}
static int
-__Pyx_CyFunction_set_dict(__pyx_CyFunctionObject *op, PyObject *value)
+__Pyx_CyFunction_set_dict(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context)
{
PyObject *tmp;
@@ -222,21 +227,21 @@
}
static PyObject *
-__Pyx_CyFunction_get_globals(__pyx_CyFunctionObject *op)
+__Pyx_CyFunction_get_globals(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)
{
Py_INCREF(op->func_globals);
return op->func_globals;
}
static PyObject *
-__Pyx_CyFunction_get_closure(CYTHON_UNUSED __pyx_CyFunctionObject *op)
+__Pyx_CyFunction_get_closure(CYTHON_UNUSED __pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)
{
Py_INCREF(Py_None);
return Py_None;
}
static PyObject *
-__Pyx_CyFunction_get_code(__pyx_CyFunctionObject *op)
+__Pyx_CyFunction_get_code(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)
{
PyObject* result = (op->func_code) ? op->func_code : Py_None;
Py_INCREF(result);
@@ -269,7 +274,7 @@
}
static int
-__Pyx_CyFunction_set_defaults(__pyx_CyFunctionObject *op, PyObject* value) {
+__Pyx_CyFunction_set_defaults(__pyx_CyFunctionObject *op, PyObject* value, CYTHON_UNUSED void *context) {
PyObject* tmp;
if (!value) {
// del => explicit None to prevent rebuilding
@@ -287,7 +292,7 @@
}
static PyObject *
-__Pyx_CyFunction_get_defaults(__pyx_CyFunctionObject *op) {
+__Pyx_CyFunction_get_defaults(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) {
PyObject* result = op->defaults_tuple;
if (unlikely(!result)) {
if (op->defaults_getter) {
@@ -302,7 +307,7 @@
}
static int
-__Pyx_CyFunction_set_kwdefaults(__pyx_CyFunctionObject *op, PyObject* value) {
+__Pyx_CyFunction_set_kwdefaults(__pyx_CyFunctionObject *op, PyObject* value, CYTHON_UNUSED void *context) {
PyObject* tmp;
if (!value) {
// del => explicit None to prevent rebuilding
@@ -320,7 +325,7 @@
}
static PyObject *
-__Pyx_CyFunction_get_kwdefaults(__pyx_CyFunctionObject *op) {
+__Pyx_CyFunction_get_kwdefaults(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) {
PyObject* result = op->defaults_kwdict;
if (unlikely(!result)) {
if (op->defaults_getter) {
@@ -335,7 +340,7 @@
}
static int
-__Pyx_CyFunction_set_annotations(__pyx_CyFunctionObject *op, PyObject* value) {
+__Pyx_CyFunction_set_annotations(__pyx_CyFunctionObject *op, PyObject* value, CYTHON_UNUSED void *context) {
PyObject* tmp;
if (!value || value == Py_None) {
value = NULL;
@@ -352,7 +357,7 @@
}
static PyObject *
-__Pyx_CyFunction_get_annotations(__pyx_CyFunctionObject *op) {
+__Pyx_CyFunction_get_annotations(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) {
PyObject* result = op->func_annotations;
if (unlikely(!result)) {
result = PyDict_New();
@@ -365,7 +370,7 @@
//#if PY_VERSION_HEX >= 0x030400C1
//static PyObject *
-//__Pyx_CyFunction_get_signature(__pyx_CyFunctionObject *op) {
+//__Pyx_CyFunction_get_signature(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) {
// PyObject *inspect_module, *signature_class, *signature;
// // from inspect import Signature
// inspect_module = PyImport_ImportModuleLevelObject(PYIDENT("inspect"), NULL, NULL, NULL, 0);
@@ -414,7 +419,7 @@
};
static PyMemberDef __pyx_CyFunction_members[] = {
- {(char *) "__module__", T_OBJECT, offsetof(__pyx_CyFunctionObject, func.m_module), PY_WRITE_RESTRICTED, 0},
+ {(char *) "__module__", T_OBJECT, offsetof(PyCFunctionObject, m_module), PY_WRITE_RESTRICTED, 0},
{0, 0, 0, 0, 0}
};
@@ -505,15 +510,20 @@
return 0;
}
-static void __Pyx_CyFunction_dealloc(__pyx_CyFunctionObject *m)
+static void __Pyx__CyFunction_dealloc(__pyx_CyFunctionObject *m)
{
- PyObject_GC_UnTrack(m);
if (__Pyx_CyFunction_weakreflist(m) != NULL)
PyObject_ClearWeakRefs((PyObject *) m);
__Pyx_CyFunction_clear(m);
PyObject_GC_Del(m);
}
+static void __Pyx_CyFunction_dealloc(__pyx_CyFunctionObject *m)
+{
+ PyObject_GC_UnTrack(m);
+ __Pyx__CyFunction_dealloc(m);
+}
+
static int __Pyx_CyFunction_traverse(__pyx_CyFunctionObject *m, visitproc visit, void *arg)
{
Py_VISIT(m->func_closure);
@@ -583,7 +593,7 @@
return (*meth)(self, arg);
break;
case METH_VARARGS | METH_KEYWORDS:
- return (*(PyCFunctionWithKeywords)meth)(self, arg, kw);
+ return (*(PyCFunctionWithKeywords)(void*)meth)(self, arg, kw);
case METH_NOARGS:
if (likely(kw == NULL || PyDict_Size(kw) == 0)) {
size = PyTuple_GET_SIZE(arg);
@@ -599,10 +609,16 @@
if (likely(kw == NULL || PyDict_Size(kw) == 0)) {
size = PyTuple_GET_SIZE(arg);
if (likely(size == 1)) {
- PyObject *result, *arg0 = PySequence_ITEM(arg, 0);
- if (unlikely(!arg0)) return NULL;
+ PyObject *result, *arg0;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ arg0 = PyTuple_GET_ITEM(arg, 0);
+ #else
+ arg0 = PySequence_ITEM(arg, 0); if (unlikely(!arg0)) return NULL;
+ #endif
result = (*meth)(self, arg0);
+ #if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS)
Py_DECREF(arg0);
+ #endif
return result;
}
PyErr_Format(PyExc_TypeError,
@@ -714,12 +730,18 @@
#if PY_VERSION_HEX >= 0x030400a1
0, /*tp_finalize*/
#endif
+#if PY_VERSION_HEX >= 0x030800b1
+ 0, /*tp_vectorcall*/
+#endif
+#if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000
+ 0, /*tp_print*/
+#endif
};
static int __pyx_CyFunction_init(void) {
__pyx_CyFunctionType = __Pyx_FetchCommonType(&__pyx_CyFunctionType_type);
- if (__pyx_CyFunctionType == NULL) {
+ if (unlikely(__pyx_CyFunctionType == NULL)) {
return -1;
}
return 0;
@@ -729,7 +751,7 @@
__pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func;
m->defaults = PyObject_Malloc(size);
- if (!m->defaults)
+ if (unlikely(!m->defaults))
return PyErr_NoMemory();
memset(m->defaults, 0, size);
m->defaults_pyobjects = pyobjects;
@@ -755,12 +777,12 @@
}
//////////////////// CyFunctionClassCell.proto ////////////////////
-static CYTHON_INLINE int __Pyx_CyFunction_InitClassCell(PyObject *cyfunctions, PyObject *classobj);
+static int __Pyx_CyFunction_InitClassCell(PyObject *cyfunctions, PyObject *classobj);/*proto*/
//////////////////// CyFunctionClassCell ////////////////////
//@requires: CythonFunction
-static CYTHON_INLINE int __Pyx_CyFunction_InitClassCell(PyObject *cyfunctions, PyObject *classobj) {
+static int __Pyx_CyFunction_InitClassCell(PyObject *cyfunctions, PyObject *classobj) {
Py_ssize_t i, count = PyList_GET_SIZE(cyfunctions);
for (i = 0; i < count; i++) {
@@ -824,9 +846,14 @@
return (PyObject *) fusedfunc;
}
-static void __pyx_FusedFunction_dealloc(__pyx_FusedFunctionObject *self) {
- __pyx_FusedFunction_clear(self);
- __pyx_FusedFunctionType->tp_free((PyObject *) self);
+static void
+__pyx_FusedFunction_dealloc(__pyx_FusedFunctionObject *self)
+{
+ PyObject_GC_UnTrack(self);
+ Py_CLEAR(self->self);
+ Py_CLEAR(self->type);
+ Py_CLEAR(self->__signatures__);
+ __Pyx__CyFunction_dealloc((__pyx_CyFunctionObject *) self);
}
static int
@@ -1182,6 +1209,12 @@
#if PY_VERSION_HEX >= 0x030400a1
0, /*tp_finalize*/
#endif
+#if PY_VERSION_HEX >= 0x030800b1
+ 0, /*tp_vectorcall*/
+#endif
+#if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000
+ 0, /*tp_print*/
+#endif
};
static int __pyx_FusedFunction_init(void) {
@@ -1195,20 +1228,20 @@
//////////////////// ClassMethod.proto ////////////////////
#include "descrobject.h"
-static PyObject* __Pyx_Method_ClassMethod(PyObject *method); /*proto*/
+static CYTHON_UNUSED PyObject* __Pyx_Method_ClassMethod(PyObject *method); /*proto*/
//////////////////// ClassMethod ////////////////////
static PyObject* __Pyx_Method_ClassMethod(PyObject *method) {
-#if CYTHON_COMPILING_IN_PYPY
+#if CYTHON_COMPILING_IN_PYPY && PYPY_VERSION_NUM <= 0x05080000
if (PyObject_TypeCheck(method, &PyWrapperDescr_Type)) {
// cdef classes
return PyClassMethod_New(method);
}
#else
-#if CYTHON_COMPILING_IN_PYSTON
- // special C-API function only in Pyston
- if (PyMethodDescr_Check(method)) {
+#if CYTHON_COMPILING_IN_PYSTON || CYTHON_COMPILING_IN_PYPY
+ // special C-API function only in Pyston and PyPy >= 5.9
+ if (PyMethodDescr_Check(method))
#else
// It appears that PyMethodDescr_Type is not exposed anywhere in the CPython C-API
static PyTypeObject *methoddescr_type = NULL;
@@ -1218,8 +1251,9 @@
methoddescr_type = Py_TYPE(meth);
Py_DECREF(meth);
}
- if (PyObject_TypeCheck(method, methoddescr_type)) {
+ if (__Pyx_TypeCheck(method, methoddescr_type))
#endif
+ {
// cdef classes
PyMethodDescrObject *descr = (PyMethodDescrObject *)method;
#if PY_VERSION_HEX < 0x03020000
@@ -1238,7 +1272,7 @@
return PyClassMethod_New(method);
}
#ifdef __Pyx_CyFunction_USED
- else if (PyObject_TypeCheck(method, __pyx_CyFunctionType)) {
+ else if (__Pyx_CyFunction_Check(method)) {
return PyClassMethod_New(method);
}
#endif
diff -Nru cython-0.26.1/Cython/Utility/Embed.c cython-0.29.14/Cython/Utility/Embed.c
--- cython-0.26.1/Cython/Utility/Embed.c 2015-09-10 16:25:36.000000000 +0000
+++ cython-0.29.14/Cython/Utility/Embed.c 2018-09-22 14:18:56.000000000 +0000
@@ -32,6 +32,20 @@
%(module_is_main)s = 1;
#if PY_MAJOR_VERSION < 3
init%(module_name)s();
+ #elif CYTHON_PEP489_MULTI_PHASE_INIT
+ m = PyInit_%(module_name)s();
+ if (!PyModule_Check(m)) {
+ PyModuleDef *mdef = (PyModuleDef *) m;
+ PyObject *modname = PyUnicode_FromString("__main__");
+ m = NULL;
+ if (modname) {
+ // FIXME: not currently calling PyModule_FromDefAndSpec() here because we do not have a module spec!
+ // FIXME: not currently setting __file__, __path__, __spec__, ...
+ m = PyModule_NewObject(modname);
+ Py_DECREF(modname);
+ if (m) PyModule_ExecDef(m, mdef);
+ }
+ }
#else
m = PyInit_%(module_name)s();
#endif
diff -Nru cython-0.26.1/Cython/Utility/Exceptions.c cython-0.29.14/Cython/Utility/Exceptions.c
--- cython-0.26.1/Cython/Utility/Exceptions.c 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/Cython/Utility/Exceptions.c 2019-02-08 19:14:39.000000000 +0000
@@ -11,10 +11,12 @@
#if CYTHON_FAST_THREAD_STATE
#define __Pyx_PyThreadState_declare PyThreadState *$local_tstate_cname;
-#define __Pyx_PyThreadState_assign $local_tstate_cname = PyThreadState_GET();
+#define __Pyx_PyThreadState_assign $local_tstate_cname = __Pyx_PyThreadState_Current;
+#define __Pyx_PyErr_Occurred() $local_tstate_cname->curexc_type
#else
#define __Pyx_PyThreadState_declare
#define __Pyx_PyThreadState_assign
+#define __Pyx_PyErr_Occurred() PyErr_Occurred()
#endif
@@ -31,11 +33,28 @@
/////////////// PyErrExceptionMatches ///////////////
#if CYTHON_FAST_THREAD_STATE
+static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) {
+ Py_ssize_t i, n;
+ n = PyTuple_GET_SIZE(tuple);
+#if PY_MAJOR_VERSION >= 3
+ // the tighter subtype checking in Py3 allows faster out-of-order comparison
+ for (i=0; i<n; i++) {
+ if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1;
+ }
+#endif
+ for (i=0; i<n; i++) {
+ if (__Pyx_PyErr_GivenExceptionMatches(exc_type, PyTuple_GET_ITEM(tuple, i))) return 1;
+ }
+ return 0;
+}
+
+static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err) {
+ PyObject *exc_type = tstate->curexc_type;
if (exc_type == err) return 1;
if (unlikely(!exc_type)) return 0;
- return PyErr_GivenExceptionMatches(exc_type, err);
+ if (unlikely(PyTuple_Check(err)))
+ return __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err);
+ return __Pyx_PyErr_GivenExceptionMatches(exc_type, err);
}
#endif
@@ -44,6 +63,7 @@
//@requires: PyThreadStateGet
#if CYTHON_FAST_THREAD_STATE
+#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL)
#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb)
#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb)
#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState($local_tstate_cname, type, value, tb)
@@ -51,9 +71,19 @@
static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); /*proto*/
static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); /*proto*/
+#if CYTHON_COMPILING_IN_CPYTHON
+#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL))
#else
+#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc)
+#endif
+
+#else
+#define __Pyx_PyErr_Clear() PyErr_Clear()
+#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc)
#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb)
#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb)
+#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb)
+#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb)
#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb)
#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb)
#endif
@@ -230,11 +260,7 @@
goto bad;
}
-#if PY_VERSION_HEX >= 0x03030000
if (cause) {
-#else
- if (cause && cause != Py_None) {
-#endif
PyObject *fixed_cause;
if (cause == Py_None) {
// raise ... from None
@@ -265,7 +291,7 @@
PyErr_Restore(tmp_type, tmp_value, tb);
Py_XDECREF(tmp_tb);
#else
- PyThreadState *tstate = PyThreadState_GET();
+ PyThreadState *tstate = __Pyx_PyThreadState_Current;
PyObject* tmp_tb = tstate->curexc_traceback;
if (tb != tmp_tb) {
Py_INCREF(tb);
@@ -281,6 +307,31 @@
}
#endif
+
+/////////////// GetTopmostException.proto ///////////////
+
+#if CYTHON_USE_EXC_INFO_STACK
+static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate);
+#endif
+
+/////////////// GetTopmostException ///////////////
+
+#if CYTHON_USE_EXC_INFO_STACK
+// Copied from errors.c in CPython.
+static _PyErr_StackItem *
+__Pyx_PyErr_GetTopmostException(PyThreadState *tstate)
+{
+ _PyErr_StackItem *exc_info = tstate->exc_info;
+ while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) &&
+ exc_info->previous_item != NULL)
+ {
+ exc_info = exc_info->previous_item;
+ }
+ return exc_info;
+}
+#endif
+
+
/////////////// GetException.proto ///////////////
//@substitute: naming
//@requires: PyThreadStateGet
@@ -295,10 +346,11 @@
/////////////// GetException ///////////////
#if CYTHON_FAST_THREAD_STATE
-static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {
+static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb)
#else
-static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) {
+static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb)
#endif
+{
PyObject *local_type, *local_value, *local_tb;
#if CYTHON_FAST_THREAD_STATE
PyObject *tmp_type, *tmp_value, *tmp_tb;
@@ -333,12 +385,24 @@
*value = local_value;
*tb = local_tb;
#if CYTHON_FAST_THREAD_STATE
+ #if CYTHON_USE_EXC_INFO_STACK
+ {
+ _PyErr_StackItem *exc_info = tstate->exc_info;
+ tmp_type = exc_info->exc_type;
+ tmp_value = exc_info->exc_value;
+ tmp_tb = exc_info->exc_traceback;
+ exc_info->exc_type = local_type;
+ exc_info->exc_value = local_value;
+ exc_info->exc_traceback = local_tb;
+ }
+ #else
tmp_type = tstate->exc_type;
tmp_value = tstate->exc_value;
tmp_tb = tstate->exc_traceback;
tstate->exc_type = local_type;
tstate->exc_value = local_value;
tstate->exc_traceback = local_tb;
+ #endif
// Make sure tstate is in a consistent state when we XDECREF
// these objects (DECREF may run arbitrary code).
Py_XDECREF(tmp_type);
@@ -362,15 +426,23 @@
static CYTHON_INLINE void __Pyx_ReraiseException(void); /*proto*/
-/////////////// ReRaiseException.proto ///////////////
+/////////////// ReRaiseException ///////////////
+//@requires: GetTopmostException
static CYTHON_INLINE void __Pyx_ReraiseException(void) {
PyObject *type = NULL, *value = NULL, *tb = NULL;
#if CYTHON_FAST_THREAD_STATE
PyThreadState *tstate = PyThreadState_GET();
+ #if CYTHON_USE_EXC_INFO_STACK
+ _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate);
+ type = exc_info->exc_type;
+ value = exc_info->exc_value;
+ tb = exc_info->exc_traceback;
+ #else
type = tstate->exc_type;
value = tstate->exc_value;
tb = tstate->exc_traceback;
+ #endif
#else
PyErr_GetExcInfo(&type, &value, &tb);
#endif
@@ -411,12 +483,20 @@
#endif
/////////////// SaveResetException ///////////////
+//@requires: GetTopmostException
#if CYTHON_FAST_THREAD_STATE
static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {
+ #if CYTHON_USE_EXC_INFO_STACK
+ _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate);
+ *type = exc_info->exc_type;
+ *value = exc_info->exc_value;
+ *tb = exc_info->exc_traceback;
+ #else
*type = tstate->exc_type;
*value = tstate->exc_value;
*tb = tstate->exc_traceback;
+ #endif
Py_XINCREF(*type);
Py_XINCREF(*value);
Py_XINCREF(*tb);
@@ -424,12 +504,23 @@
static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) {
PyObject *tmp_type, *tmp_value, *tmp_tb;
+
+ #if CYTHON_USE_EXC_INFO_STACK
+ _PyErr_StackItem *exc_info = tstate->exc_info;
+ tmp_type = exc_info->exc_type;
+ tmp_value = exc_info->exc_value;
+ tmp_tb = exc_info->exc_traceback;
+ exc_info->exc_type = type;
+ exc_info->exc_value = value;
+ exc_info->exc_traceback = tb;
+ #else
tmp_type = tstate->exc_type;
tmp_value = tstate->exc_value;
tmp_tb = tstate->exc_traceback;
tstate->exc_type = type;
tstate->exc_value = value;
tstate->exc_traceback = tb;
+ #endif
Py_XDECREF(tmp_type);
Py_XDECREF(tmp_value);
Py_XDECREF(tmp_tb);
@@ -452,6 +543,17 @@
#if CYTHON_FAST_THREAD_STATE
static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {
PyObject *tmp_type, *tmp_value, *tmp_tb;
+
+ #if CYTHON_USE_EXC_INFO_STACK
+ _PyErr_StackItem *exc_info = tstate->exc_info;
+ tmp_type = exc_info->exc_type;
+ tmp_value = exc_info->exc_value;
+ tmp_tb = exc_info->exc_traceback;
+
+ exc_info->exc_type = *type;
+ exc_info->exc_value = *value;
+ exc_info->exc_traceback = *tb;
+ #else
tmp_type = tstate->exc_type;
tmp_value = tstate->exc_value;
tmp_tb = tstate->exc_traceback;
@@ -459,6 +561,7 @@
tstate->exc_type = *type;
tstate->exc_value = *value;
tstate->exc_traceback = *tb;
+ #endif
*type = tmp_type;
*value = tmp_value;
@@ -531,47 +634,62 @@
/////////////// CLineInTraceback.proto ///////////////
-static int __Pyx_CLineForTraceback(int c_line);
+#ifdef CYTHON_CLINE_IN_TRACEBACK /* 0 or 1 to disable/enable C line display in tracebacks at C compile time */
+#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0)
+#else
+static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line);/*proto*/
+#endif
/////////////// CLineInTraceback ///////////////
//@requires: ObjectHandling.c::PyObjectGetAttrStr
+//@requires: ObjectHandling.c::PyDictVersioning
+//@requires: PyErrFetchRestore
//@substitute: naming
-static int __Pyx_CLineForTraceback(int c_line) {
-#ifdef CYTHON_CLINE_IN_TRACEBACK /* 0 or 1 to disable/enable C line display in tracebacks at C compile time */
- return ((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0;
-#else
+#ifndef CYTHON_CLINE_IN_TRACEBACK
+static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line) {
PyObject *use_cline;
+ PyObject *ptype, *pvalue, *ptraceback;
+#if CYTHON_COMPILING_IN_CPYTHON
+ PyObject **cython_runtime_dict;
+#endif
+
+ if (unlikely(!${cython_runtime_cname})) {
+ // Very early error where the runtime module is not set up yet.
+ return c_line;
+ }
+
+ __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback);
#if CYTHON_COMPILING_IN_CPYTHON
- PyObject **cython_runtime_dict = _PyObject_GetDictPtr(${cython_runtime_cname});
+ cython_runtime_dict = _PyObject_GetDictPtr(${cython_runtime_cname});
if (likely(cython_runtime_dict)) {
- use_cline = PyDict_GetItem(*cython_runtime_dict, PYIDENT("cline_in_traceback"));
+ __PYX_PY_DICT_LOOKUP_IF_MODIFIED(
+ use_cline, *cython_runtime_dict,
+ __Pyx_PyDict_GetItemStr(*cython_runtime_dict, PYIDENT("cline_in_traceback")))
} else
#endif
{
- PyObject *ptype, *pvalue, *ptraceback;
- PyObject *use_cline_obj;
- PyErr_Fetch(&ptype, &pvalue, &ptraceback);
- use_cline_obj = __Pyx_PyObject_GetAttrStr(${cython_runtime_cname}, PYIDENT("cline_in_traceback"));
+ PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStr(${cython_runtime_cname}, PYIDENT("cline_in_traceback"));
if (use_cline_obj) {
use_cline = PyObject_Not(use_cline_obj) ? Py_False : Py_True;
Py_DECREF(use_cline_obj);
} else {
+ PyErr_Clear();
use_cline = NULL;
}
- PyErr_Restore(ptype, pvalue, ptraceback);
}
if (!use_cline) {
c_line = 0;
PyObject_SetAttr(${cython_runtime_cname}, PYIDENT("cline_in_traceback"), Py_False);
}
- else if (PyObject_Not(use_cline) != 0) {
+ else if (use_cline == Py_False || (use_cline != Py_True && PyObject_Not(use_cline) != 0)) {
c_line = 0;
}
+ __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback);
return c_line;
-#endif
}
+#endif
/////////////// AddTraceback.proto ///////////////
@@ -645,9 +763,10 @@
int py_line, const char *filename) {
PyCodeObject *py_code = 0;
PyFrameObject *py_frame = 0;
+ PyThreadState *tstate = __Pyx_PyThreadState_Current;
if (c_line) {
- c_line = __Pyx_CLineForTraceback(c_line);
+ c_line = __Pyx_CLineForTraceback(tstate, c_line);
}
// Negate to avoid collisions between py and c lines.
@@ -659,10 +778,10 @@
$global_code_object_cache_insert(c_line ? -c_line : py_line, py_code);
}
py_frame = PyFrame_New(
- PyThreadState_GET(), /*PyThreadState *tstate,*/
- py_code, /*PyCodeObject *code,*/
- $moddict_cname, /*PyObject *globals,*/
- 0 /*PyObject *locals*/
+ tstate, /*PyThreadState *tstate,*/
+ py_code, /*PyCodeObject *code,*/
+ $moddict_cname, /*PyObject *globals,*/
+ 0 /*PyObject *locals*/
);
if (!py_frame) goto bad;
__Pyx_PyFrame_SetLineNumber(py_frame, py_line);
diff -Nru cython-0.26.1/Cython/Utility/ExtensionTypes.c cython-0.29.14/Cython/Utility/ExtensionTypes.c
--- cython-0.26.1/Cython/Utility/ExtensionTypes.c 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/Cython/Utility/ExtensionTypes.c 2018-09-22 14:18:56.000000000 +0000
@@ -1,3 +1,78 @@
+/////////////// PyType_Ready.proto ///////////////
+
+static int __Pyx_PyType_Ready(PyTypeObject *t);
+
+/////////////// PyType_Ready ///////////////
+
+// Wrapper around PyType_Ready() with some runtime checks and fixes
+// to deal with multiple inheritance.
+static int __Pyx_PyType_Ready(PyTypeObject *t) {
+ // Loop over all bases (except the first) and check that those
+ // really are heap types. Otherwise, it would not be safe to
+ // subclass them.
+ //
+ // We also check tp_dictoffset: it is unsafe to inherit
+ // tp_dictoffset from a base class because the object structures
+ // would not be compatible. So, if our extension type doesn't set
+ // tp_dictoffset (i.e. there is no __dict__ attribute in the object
+ // structure), we need to check that none of the base classes sets
+ // it either.
+ int r;
+ PyObject *bases = t->tp_bases;
+ if (bases)
+ {
+ Py_ssize_t i, n = PyTuple_GET_SIZE(bases);
+ for (i = 1; i < n; i++) /* Skip first base */
+ {
+ PyObject *b0 = PyTuple_GET_ITEM(bases, i);
+ PyTypeObject *b;
+#if PY_MAJOR_VERSION < 3
+ /* Disallow old-style classes */
+ if (PyClass_Check(b0))
+ {
+ PyErr_Format(PyExc_TypeError, "base class '%.200s' is an old-style class",
+ PyString_AS_STRING(((PyClassObject*)b0)->cl_name));
+ return -1;
+ }
+#endif
+ b = (PyTypeObject*)b0;
+ if (!PyType_HasFeature(b, Py_TPFLAGS_HEAPTYPE))
+ {
+ PyErr_Format(PyExc_TypeError, "base class '%.200s' is not a heap type",
+ b->tp_name);
+ return -1;
+ }
+ if (t->tp_dictoffset == 0 && b->tp_dictoffset)
+ {
+ PyErr_Format(PyExc_TypeError,
+ "extension type '%.200s' has no __dict__ slot, but base type '%.200s' has: "
+ "either add 'cdef dict __dict__' to the extension type "
+ "or add '__slots__ = [...]' to the base type",
+ t->tp_name, b->tp_name);
+ return -1;
+ }
+ }
+ }
+
+#if PY_VERSION_HEX >= 0x03050000
+ // As of https://bugs.python.org/issue22079
+ // PyType_Ready enforces that all bases of a non-heap type are
+ // non-heap. We know that this is the case for the solid base but
+ // other bases are heap allocated and are kept alive through the
+ // tp_bases reference.
+ // Other than this check, the Py_TPFLAGS_HEAPTYPE flag is unused
+ // in PyType_Ready().
+ t->tp_flags |= Py_TPFLAGS_HEAPTYPE;
+#endif
+
+ r = PyType_Ready(t);
+
+#if PY_VERSION_HEX >= 0x03050000
+ t->tp_flags &= ~Py_TPFLAGS_HEAPTYPE;
+#endif
+
+ return r;
+}
/////////////// CallNextTpDealloc.proto ///////////////
@@ -134,7 +209,7 @@
PyErr_Format(PyExc_RuntimeError, "Unable to initialize pickling for %s", ((PyTypeObject*)type_obj)->tp_name);
ret = -1;
GOOD:
-#if !CYTHON_COMPILING_IN_CPYTHON
+#if !CYTHON_USE_PYTYPE_LOOKUP
Py_XDECREF(object_reduce);
Py_XDECREF(object_reduce_ex);
#endif
diff -Nru cython-0.26.1/Cython/Utility/FunctionArguments.c cython-0.29.14/Cython/Utility/FunctionArguments.c
--- cython-0.26.1/Cython/Utility/FunctionArguments.c 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Cython/Utility/FunctionArguments.c 2018-09-22 14:18:56.000000000 +0000
@@ -1,34 +1,31 @@
//////////////////// ArgTypeTest.proto ////////////////////
-static CYTHON_INLINE int __Pyx_ArgTypeTest(PyObject *obj, PyTypeObject *type, int none_allowed,
- const char *name, int exact); /*proto*/
-//////////////////// ArgTypeTest ////////////////////
+#define __Pyx_ArgTypeTest(obj, type, none_allowed, name, exact) \
+ ((likely((Py_TYPE(obj) == type) | (none_allowed && (obj == Py_None)))) ? 1 : \
+ __Pyx__ArgTypeTest(obj, type, name, exact))
-static void __Pyx_RaiseArgumentTypeInvalid(const char* name, PyObject *obj, PyTypeObject *type) {
- PyErr_Format(PyExc_TypeError,
- "Argument '%.200s' has incorrect type (expected %.200s, got %.200s)",
- name, type->tp_name, Py_TYPE(obj)->tp_name);
-}
+static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact); /*proto*/
+
+//////////////////// ArgTypeTest ////////////////////
-static CYTHON_INLINE int __Pyx_ArgTypeTest(PyObject *obj, PyTypeObject *type, int none_allowed,
- const char *name, int exact)
+static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact)
{
if (unlikely(!type)) {
PyErr_SetString(PyExc_SystemError, "Missing type object");
return 0;
}
- if (none_allowed && obj == Py_None) return 1;
else if (exact) {
- if (likely(Py_TYPE(obj) == type)) return 1;
#if PY_MAJOR_VERSION == 2
- else if ((type == &PyBaseString_Type) && likely(__Pyx_PyBaseString_CheckExact(obj))) return 1;
+ if ((type == &PyBaseString_Type) && likely(__Pyx_PyBaseString_CheckExact(obj))) return 1;
#endif
}
else {
- if (likely(PyObject_TypeCheck(obj, type))) return 1;
+ if (likely(__Pyx_TypeCheck(obj, type))) return 1;
}
- __Pyx_RaiseArgumentTypeInvalid(name, obj, type);
+ PyErr_Format(PyExc_TypeError,
+ "Argument '%.200s' has incorrect type (expected %.200s, got %.200s)",
+ name, type->tp_name, Py_TYPE(obj)->tp_name);
return 0;
}
@@ -72,14 +69,11 @@
//////////////////// RaiseKeywordRequired.proto ////////////////////
-static CYTHON_INLINE void __Pyx_RaiseKeywordRequired(const char* func_name, PyObject* kw_name); /*proto*/
+static void __Pyx_RaiseKeywordRequired(const char* func_name, PyObject* kw_name); /*proto*/
//////////////////// RaiseKeywordRequired ////////////////////
-static CYTHON_INLINE void __Pyx_RaiseKeywordRequired(
- const char* func_name,
- PyObject* kw_name)
-{
+static void __Pyx_RaiseKeywordRequired(const char* func_name, PyObject* kw_name) {
PyErr_Format(PyExc_TypeError,
#if PY_MAJOR_VERSION >= 3
"%s() needs keyword-only argument %U", func_name, kw_name);
@@ -123,7 +117,7 @@
//////////////////// KeywordStringCheck.proto ////////////////////
-static CYTHON_INLINE int __Pyx_CheckKeywordStrings(PyObject *kwdict, const char* function_name, int kw_allowed); /*proto*/
+static int __Pyx_CheckKeywordStrings(PyObject *kwdict, const char* function_name, int kw_allowed); /*proto*/
//////////////////// KeywordStringCheck ////////////////////
@@ -131,7 +125,7 @@
// were passed to a function, or if any keywords were passed to a
// function that does not accept them.
-static CYTHON_INLINE int __Pyx_CheckKeywordStrings(
+static int __Pyx_CheckKeywordStrings(
PyObject *kwdict,
const char* function_name,
int kw_allowed)
@@ -146,7 +140,7 @@
#else
while (PyDict_Next(kwdict, &pos, &key, 0)) {
#if PY_MAJOR_VERSION < 3
- if (unlikely(!PyString_CheckExact(key)) && unlikely(!PyString_Check(key)))
+ if (unlikely(!PyString_Check(key)))
#endif
if (unlikely(!PyUnicode_Check(key)))
goto invalid_keyword_type;
diff -Nru cython-0.26.1/Cython/Utility/ImportExport.c cython-0.29.14/Cython/Utility/ImportExport.c
--- cython-0.26.1/Cython/Utility/ImportExport.c 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Utility/ImportExport.c 2018-11-24 09:20:06.000000000 +0000
@@ -23,7 +23,7 @@
PyObject *global_dict = 0;
PyObject *empty_dict = 0;
PyObject *list;
- #if PY_VERSION_HEX < 0x03030000
+ #if PY_MAJOR_VERSION < 3
PyObject *py_import;
py_import = __Pyx_PyObject_GetAttrStr($builtins_cname, PYIDENT("__import__"));
if (!py_import)
@@ -48,17 +48,8 @@
if (level == -1) {
if (strchr(__Pyx_MODULE_NAME, '.')) {
/* try package relative import first */
- #if PY_VERSION_HEX < 0x03030000
- PyObject *py_level = PyInt_FromLong(1);
- if (!py_level)
- goto bad;
- module = PyObject_CallFunctionObjArgs(py_import,
- name, global_dict, empty_dict, list, py_level, NULL);
- Py_DECREF(py_level);
- #else
module = PyImport_ImportModuleLevelObject(
name, global_dict, empty_dict, list, 1);
- #endif
if (!module) {
if (!PyErr_ExceptionMatches(PyExc_ImportError))
goto bad;
@@ -69,12 +60,12 @@
}
#endif
if (!module) {
- #if PY_VERSION_HEX < 0x03030000
+ #if PY_MAJOR_VERSION < 3
PyObject *py_level = PyInt_FromLong(level);
if (!py_level)
goto bad;
module = PyObject_CallFunctionObjArgs(py_import,
- name, global_dict, empty_dict, list, py_level, NULL);
+ name, global_dict, empty_dict, list, py_level, (PyObject *)NULL);
Py_DECREF(py_level);
#else
module = PyImport_ImportModuleLevelObject(
@@ -83,7 +74,7 @@
}
}
bad:
- #if PY_VERSION_HEX < 0x03030000
+ #if PY_MAJOR_VERSION < 3
Py_XDECREF(py_import);
#endif
Py_XDECREF(empty_list);
@@ -231,35 +222,10 @@
}
-/////////////// ModuleImport.proto ///////////////
-
-static PyObject *__Pyx_ImportModule(const char *name); /*proto*/
-
-/////////////// ModuleImport ///////////////
-//@requires: PyIdentifierFromString
-
-#ifndef __PYX_HAVE_RT_ImportModule
-#define __PYX_HAVE_RT_ImportModule
-static PyObject *__Pyx_ImportModule(const char *name) {
- PyObject *py_name = 0;
- PyObject *py_module = 0;
-
- py_name = __Pyx_PyIdentifier_FromString(name);
- if (!py_name)
- goto bad;
- py_module = PyImport_Import(py_name);
- Py_DECREF(py_name);
- return py_module;
-bad:
- Py_XDECREF(py_name);
- return 0;
-}
-#endif
-
-
/////////////// SetPackagePathFromImportLib.proto ///////////////
-#if PY_VERSION_HEX >= 0x03030000
+// PY_VERSION_HEX >= 0x03030000
+#if PY_MAJOR_VERSION >= 3 && !CYTHON_PEP489_MULTI_PHASE_INIT
static int __Pyx_SetPackagePathFromImportLib(const char* parent_package_name, PyObject *module_name);
#else
#define __Pyx_SetPackagePathFromImportLib(a, b) 0
@@ -269,7 +235,8 @@
//@requires: ObjectHandling.c::PyObjectGetAttrStr
//@substitute: naming
-#if PY_VERSION_HEX >= 0x03030000
+// PY_VERSION_HEX >= 0x03030000
+#if PY_MAJOR_VERSION >= 3 && !CYTHON_PEP489_MULTI_PHASE_INIT
static int __Pyx_SetPackagePathFromImportLib(const char* parent_package_name, PyObject *module_name) {
PyObject *importlib, *loader, *osmod, *ossep, *parts, *package_path;
PyObject *path = NULL, *file_path = NULL;
@@ -341,37 +308,34 @@
/////////////// TypeImport.proto ///////////////
-static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, size_t size, int strict); /*proto*/
+#ifndef __PYX_HAVE_RT_ImportType_proto
+#define __PYX_HAVE_RT_ImportType_proto
+
+enum __Pyx_ImportType_CheckSize {
+ __Pyx_ImportType_CheckSize_Error = 0,
+ __Pyx_ImportType_CheckSize_Warn = 1,
+ __Pyx_ImportType_CheckSize_Ignore = 2
+};
+
+static PyTypeObject *__Pyx_ImportType(PyObject* module, const char *module_name, const char *class_name, size_t size, enum __Pyx_ImportType_CheckSize check_size); /*proto*/
+
+#endif
/////////////// TypeImport ///////////////
-//@requires: PyIdentifierFromString
-//@requires: ModuleImport
#ifndef __PYX_HAVE_RT_ImportType
#define __PYX_HAVE_RT_ImportType
-static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name,
- size_t size, int strict)
+static PyTypeObject *__Pyx_ImportType(PyObject *module, const char *module_name, const char *class_name,
+ size_t size, enum __Pyx_ImportType_CheckSize check_size)
{
- PyObject *py_module = 0;
PyObject *result = 0;
- PyObject *py_name = 0;
char warning[200];
Py_ssize_t basicsize;
#ifdef Py_LIMITED_API
PyObject *py_basicsize;
#endif
- py_module = __Pyx_ImportModule(module_name);
- if (!py_module)
- goto bad;
- py_name = __Pyx_PyIdentifier_FromString(class_name);
- if (!py_name)
- goto bad;
- result = PyObject_GetAttr(py_module, py_name);
- Py_DECREF(py_name);
- py_name = 0;
- Py_DECREF(py_module);
- py_module = 0;
+ result = PyObject_GetAttrString(module, class_name);
if (!result)
goto bad;
if (!PyType_Check(result)) {
@@ -392,21 +356,30 @@
if (basicsize == (Py_ssize_t)-1 && PyErr_Occurred())
goto bad;
#endif
- if (!strict && (size_t)basicsize > size) {
- PyOS_snprintf(warning, sizeof(warning),
- "%s.%s size changed, may indicate binary incompatibility. Expected %zd, got %zd",
- module_name, class_name, basicsize, size);
- if (PyErr_WarnEx(NULL, warning, 0) < 0) goto bad;
+ if ((size_t)basicsize < size) {
+ PyErr_Format(PyExc_ValueError,
+ "%.200s.%.200s size changed, may indicate binary incompatibility. "
+ "Expected %zd from C header, got %zd from PyObject",
+ module_name, class_name, size, basicsize);
+ goto bad;
}
- else if ((size_t)basicsize != size) {
+ if (check_size == __Pyx_ImportType_CheckSize_Error && (size_t)basicsize != size) {
PyErr_Format(PyExc_ValueError,
- "%.200s.%.200s has the wrong size, try recompiling. Expected %zd, got %zd",
- module_name, class_name, basicsize, size);
+ "%.200s.%.200s size changed, may indicate binary incompatibility. "
+ "Expected %zd from C header, got %zd from PyObject",
+ module_name, class_name, size, basicsize);
goto bad;
}
+ else if (check_size == __Pyx_ImportType_CheckSize_Warn && (size_t)basicsize > size) {
+ PyOS_snprintf(warning, sizeof(warning),
+ "%s.%s size changed, may indicate binary incompatibility. "
+ "Expected %zd from C header, got %zd from PyObject",
+ module_name, class_name, size, basicsize);
+ if (PyErr_WarnEx(NULL, warning, 0) < 0) goto bad;
+ }
+ /* check_size == __Pyx_ImportType_CheckSize_Ignore does not warn nor error */
return (PyTypeObject *)result;
bad:
- Py_XDECREF(py_module);
Py_XDECREF(result);
return NULL;
}
@@ -663,6 +636,67 @@
}
+/////////////// MergeVTables.proto ///////////////
+//@requires: GetVTable
+
+static int __Pyx_MergeVtables(PyTypeObject *type); /*proto*/
+
+/////////////// MergeVTables ///////////////
+
+static int __Pyx_MergeVtables(PyTypeObject *type) {
+ int i;
+ void** base_vtables;
+ void* unknown = (void*)-1;
+ PyObject* bases = type->tp_bases;
+ int base_depth = 0;
+ {
+ PyTypeObject* base = type->tp_base;
+ while (base) {
+ base_depth += 1;
+ base = base->tp_base;
+ }
+ }
+ base_vtables = (void**) malloc(sizeof(void*) * (base_depth + 1));
+ base_vtables[0] = unknown;
+ // Could do MRO resolution of individual methods in the future, assuming
+ // compatible vtables, but for now simply require a common vtable base.
+ // Note that if the vtables of various bases are extended separately,
+ // resolution isn't possible and we must reject it just as when the
+ // instance struct is so extended. (It would be good to also do this
+ // check when a multiple-base class is created in pure Python as well.)
+ for (i = 1; i < PyTuple_GET_SIZE(bases); i++) {
+ void* base_vtable = __Pyx_GetVtable(((PyTypeObject*)PyTuple_GET_ITEM(bases, i))->tp_dict);
+ if (base_vtable != NULL) {
+ int j;
+ PyTypeObject* base = type->tp_base;
+ for (j = 0; j < base_depth; j++) {
+ if (base_vtables[j] == unknown) {
+ base_vtables[j] = __Pyx_GetVtable(base->tp_dict);
+ base_vtables[j + 1] = unknown;
+ }
+ if (base_vtables[j] == base_vtable) {
+ break;
+ } else if (base_vtables[j] == NULL) {
+ // No more potential matching bases (with vtables).
+ goto bad;
+ }
+ base = base->tp_base;
+ }
+ }
+ }
+ PyErr_Clear();
+ free(base_vtables);
+ return 0;
+bad:
+ PyErr_Format(
+ PyExc_TypeError,
+ "multiple bases have vtable conflict: '%s' and '%s'",
+ type->tp_base->tp_name, ((PyTypeObject*)PyTuple_GET_ITEM(bases, i))->tp_name);
+ free(base_vtables);
+ return -1;
+}
+
+
/////////////// ImportNumPyArray.proto ///////////////
static PyObject *__pyx_numpy_ndarray = NULL;
diff -Nru cython-0.26.1/Cython/Utility/MemoryView_C.c cython-0.29.14/Cython/Utility/MemoryView_C.c
--- cython-0.26.1/Cython/Utility/MemoryView_C.c 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Cython/Utility/MemoryView_C.c 2019-02-08 19:14:39.000000000 +0000
@@ -1,4 +1,5 @@
////////// MemviewSliceStruct.proto //////////
+//@proto_block: utility_code_proto_before_types
/* memoryview slice struct */
struct {{memview_struct_name}};
@@ -11,8 +12,12 @@
Py_ssize_t suboffsets[{{max_dims}}];
} {{memviewslice_name}};
+// used for "len(memviewslice)"
+#define __Pyx_MemoryView_Len(m) (m.shape[0])
+
/////////// Atomics.proto /////////////
+//@proto_block: utility_code_proto_before_types
#include <pythread.h>
@@ -77,7 +82,7 @@
/////////////// ObjectToMemviewSlice.proto ///////////////
-static CYTHON_INLINE {{memviewslice_name}} {{funcname}}(PyObject *);
+static CYTHON_INLINE {{memviewslice_name}} {{funcname}}(PyObject *, int writable_flag);
////////// MemviewSliceInit.proto //////////
@@ -122,7 +127,7 @@
/////////////// ObjectToMemviewSlice ///////////////
//@requires: MemviewSliceValidateAndInit
-static CYTHON_INLINE {{memviewslice_name}} {{funcname}}(PyObject *obj) {
+static CYTHON_INLINE {{memviewslice_name}} {{funcname}}(PyObject *obj, int writable_flag) {
{{memviewslice_name}} result = {{memslice_init}};
__Pyx_BufFmt_StackElem stack[{{struct_nesting_depth}}];
int axes_specs[] = { {{axes_specs}} };
@@ -135,7 +140,7 @@
}
retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, {{c_or_f_flag}},
- {{buf_flag}}, {{ndim}},
+ {{buf_flag}} | writable_flag, {{ndim}},
&{{dtype_typeinfo}}, stack,
&result, obj);
@@ -164,6 +169,8 @@
/////////////// MemviewSliceValidateAndInit ///////////////
//@requires: Buffer.c::TypeInfoCompare
+//@requires: Buffer.c::BufferFormatStructs
+//@requires: Buffer.c::BufferFormatCheck
static int
__pyx_check_strides(Py_buffer *buf, int dim, int ndim, int spec)
@@ -237,7 +244,7 @@
}
if (spec & __Pyx_MEMVIEW_PTR) {
- if (!buf->suboffsets || (buf->suboffsets && buf->suboffsets[dim] < 0)) {
+ if (!buf->suboffsets || (buf->suboffsets[dim] < 0)) {
PyErr_Format(PyExc_ValueError,
"Buffer is not indirectly accessible "
"in dimension %d.", dim);
@@ -387,11 +394,7 @@
Py_buffer *buf = &memview->view;
__Pyx_RefNannySetupContext("init_memviewslice", 0);
- if (!buf) {
- PyErr_SetString(PyExc_ValueError,
- "buf is NULL.");
- goto fail;
- } else if (memviewslice->memview || memviewslice->data) {
+ if (memviewslice->memview || memviewslice->data) {
PyErr_SetString(PyExc_ValueError,
"memviewslice is already initialized!");
goto fail;
@@ -437,8 +440,12 @@
return retval;
}
+#ifndef Py_NO_RETURN
+// available since Py3.3
+#define Py_NO_RETURN
+#endif
-static CYTHON_INLINE void __pyx_fatalerror(const char *fmt, ...) {
+static void __pyx_fatalerror(const char *fmt, ...) Py_NO_RETURN {
va_list vargs;
char msg[200];
@@ -447,11 +454,10 @@
#else
va_start(vargs);
#endif
-
vsnprintf(msg, 200, fmt, vargs);
- Py_FatalError(msg);
-
va_end(vargs);
+
+ Py_FatalError(msg);
}
static CYTHON_INLINE int
@@ -689,29 +695,21 @@
}
-////////// MemviewSliceIsCContig.proto //////////
-
-#define __pyx_memviewslice_is_c_contig{{ndim}}(slice) \
- __pyx_memviewslice_is_contig(slice, 'C', {{ndim}})
-
-
-////////// MemviewSliceIsFContig.proto //////////
+////////// MemviewSliceCheckContig.proto //////////
-#define __pyx_memviewslice_is_f_contig{{ndim}}(slice) \
- __pyx_memviewslice_is_contig(slice, 'F', {{ndim}})
+#define __pyx_memviewslice_is_contig_{{contig_type}}{{ndim}}(slice) \
+ __pyx_memviewslice_is_contig(slice, '{{contig_type}}', {{ndim}})
////////// MemviewSliceIsContig.proto //////////
-static int __pyx_memviewslice_is_contig(const {{memviewslice_name}} mvs,
- char order, int ndim);
+static int __pyx_memviewslice_is_contig(const {{memviewslice_name}} mvs, char order, int ndim);/*proto*/
////////// MemviewSliceIsContig //////////
static int
-__pyx_memviewslice_is_contig(const {{memviewslice_name}} mvs,
- char order, int ndim)
+__pyx_memviewslice_is_contig(const {{memviewslice_name}} mvs, char order, int ndim)
{
int i, index, step, start;
Py_ssize_t itemsize = mvs.memview->view.itemsize;
@@ -850,28 +848,40 @@
{
Py_ssize_t __pyx_tmp_idx = {{idx}};
- Py_ssize_t __pyx_tmp_shape = {{src}}.shape[{{dim}}];
- Py_ssize_t __pyx_tmp_stride = {{src}}.strides[{{dim}}];
- if ({{wraparound}} && (__pyx_tmp_idx < 0))
- __pyx_tmp_idx += __pyx_tmp_shape;
- if ({{boundscheck}} && (__pyx_tmp_idx < 0 || __pyx_tmp_idx >= __pyx_tmp_shape)) {
- {{if not have_gil}}
- #ifdef WITH_THREAD
- PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();
- #endif
- {{endif}}
-
- PyErr_SetString(PyExc_IndexError, "Index out of bounds (axis {{dim}})");
-
- {{if not have_gil}}
- #ifdef WITH_THREAD
- PyGILState_Release(__pyx_gilstate_save);
- #endif
- {{endif}}
+ {{if wraparound or boundscheck}}
+ Py_ssize_t __pyx_tmp_shape = {{src}}.shape[{{dim}}];
+ {{endif}}
- {{error_goto}}
- }
+ Py_ssize_t __pyx_tmp_stride = {{src}}.strides[{{dim}}];
+ {{if wraparound}}
+ if (__pyx_tmp_idx < 0)
+ __pyx_tmp_idx += __pyx_tmp_shape;
+ {{endif}}
+
+ {{if boundscheck}}
+ if (!__Pyx_is_valid_index(__pyx_tmp_idx, __pyx_tmp_shape)) {
+ {{if not have_gil}}
+ #ifdef WITH_THREAD
+ PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();
+ #endif
+ {{endif}}
+
+ PyErr_SetString(PyExc_IndexError,
+ "Index out of bounds (axis {{dim}})");
+
+ {{if not have_gil}}
+ #ifdef WITH_THREAD
+ PyGILState_Release(__pyx_gilstate_save);
+ #endif
+ {{endif}}
+
+ {{error_goto}}
+ }
+ {{else}}
+ // make sure label is not un-used
+ if ((0)) {{error_goto}}
+ {{endif}}
{{if all_dimensions_direct}}
{{dst}}.data += __pyx_tmp_idx * __pyx_tmp_stride;
diff -Nru cython-0.26.1/Cython/Utility/MemoryView.pyx cython-0.29.14/Cython/Utility/MemoryView.pyx
--- cython-0.26.1/Cython/Utility/MemoryView.pyx 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/Cython/Utility/MemoryView.pyx 2019-11-01 16:25:41.000000000 +0000
@@ -64,7 +64,9 @@
PyBUF_WRITABLE
PyBUF_STRIDES
PyBUF_INDIRECT
+ PyBUF_ND
PyBUF_RECORDS
+ PyBUF_RECORDS_RO
ctypedef struct __Pyx_TypeInfo:
pass
@@ -370,6 +372,10 @@
def __dealloc__(memoryview self):
if self.obj is not None:
__Pyx_ReleaseBuffer(&self.view)
+ elif (<__pyx_buffer *> &self.view).obj == Py_None:
+ # Undo the incref in __cinit__() above.
+ (<__pyx_buffer *> &self.view).obj = NULL
+ Py_DECREF(Py_None)
cdef int i
global __pyx_memoryview_thread_locks_used
@@ -408,6 +414,9 @@
return self.convert_item_to_object(itemp)
def __setitem__(memoryview self, object index, object value):
+ if self.view.readonly:
+ raise TypeError("Cannot assign to read-only memoryview")
+
have_slices, index = _unellipsify(index, self.view.ndim)
if have_slices:
@@ -422,7 +431,7 @@
cdef is_slice(self, obj):
if not isinstance(obj, memoryview):
try:
- obj = memoryview(obj, self.flags|PyBUF_ANY_CONTIGUOUS,
+ obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS,
self.dtype_is_object)
except TypeError:
return None
@@ -507,7 +516,10 @@
@cname('getbuffer')
def __getbuffer__(self, Py_buffer *info, int flags):
- if flags & PyBUF_STRIDES:
+ if flags & PyBUF_WRITABLE and self.view.readonly:
+ raise ValueError("Cannot create writable memory view from read-only memoryview")
+
+ if flags & PyBUF_ND:
info.shape = self.view.shape
else:
info.shape = NULL
@@ -531,12 +543,12 @@
info.ndim = self.view.ndim
info.itemsize = self.view.itemsize
info.len = self.view.len
- info.readonly = 0
+ info.readonly = self.view.readonly
info.obj = self
__pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)")
- # Some properties that have the same sematics as in NumPy
+ # Some properties that have the same semantics as in NumPy
@property
def T(self):
cdef _memoryviewslice result = memoryview_copy(self)
@@ -1012,7 +1024,10 @@
(<__pyx_buffer *> &result.view).obj = Py_None
Py_INCREF(Py_None)
- result.flags = PyBUF_RECORDS
+ if (memviewslice.memview).flags & PyBUF_WRITABLE:
+ result.flags = PyBUF_RECORDS
+ else:
+ result.flags = PyBUF_RECORDS_RO
result.view.shape = result.from_slice.shape
result.view.strides = result.from_slice.strides
@@ -1035,7 +1050,7 @@
@cname('__pyx_memoryview_get_slice_from_memoryview')
cdef {{memviewslice_name}} *get_slice_from_memview(memoryview memview,
- {{memviewslice_name}} *mslice):
+ {{memviewslice_name}} *mslice) except NULL:
cdef _memoryviewslice obj
if isinstance(memview, _memoryviewslice):
obj = memview
@@ -1161,11 +1176,10 @@
@cname('__pyx_memoryview_slice_get_size')
cdef Py_ssize_t slice_get_size({{memviewslice_name}} *src, int ndim) nogil:
"Return the size of the memory occupied by the slice in number of bytes"
- cdef int i
- cdef Py_ssize_t size = src.memview.view.itemsize
+ cdef Py_ssize_t shape, size = src.memview.view.itemsize
- for i in range(ndim):
- size *= src.shape[i]
+ for shape in src.shape[:ndim]:
+ size *= shape
return size
@@ -1182,11 +1196,11 @@
if order == 'F':
for idx in range(ndim):
strides[idx] = stride
- stride = stride * shape[idx]
+ stride *= shape[idx]
else:
for idx in range(ndim - 1, -1, -1):
strides[idx] = stride
- stride = stride * shape[idx]
+ stride *= shape[idx]
return stride
@@ -1340,7 +1354,7 @@
mslice.suboffsets[i] = -1
#
-### Take care of refcounting the objects in slices. Do this seperately from any copying,
+### Take care of refcounting the objects in slices. Do this separately from any copying,
### to minimize acquiring the GIL
#
diff -Nru cython-0.26.1/Cython/Utility/ModuleSetupCode.c cython-0.29.14/Cython/Utility/ModuleSetupCode.c
--- cython-0.26.1/Cython/Utility/ModuleSetupCode.c 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/Cython/Utility/ModuleSetupCode.c 2019-07-07 08:37:19.000000000 +0000
@@ -29,7 +29,7 @@
#ifndef HAVE_LONG_LONG
// CPython has required PY_LONG_LONG support for years, even if HAVE_LONG_LONG is not defined for us
- #if PY_VERSION_HEX >= 0x03030000 || (PY_MAJOR_VERSION == 2 && PY_VERSION_HEX >= 0x02070000)
+ #if PY_VERSION_HEX >= 0x02070000
#define HAVE_LONG_LONG
#endif
#endif
@@ -51,8 +51,12 @@
#define CYTHON_USE_TYPE_SLOTS 0
#undef CYTHON_USE_PYTYPE_LOOKUP
#define CYTHON_USE_PYTYPE_LOOKUP 0
- #undef CYTHON_USE_ASYNC_SLOTS
- #define CYTHON_USE_ASYNC_SLOTS 0
+ #if PY_VERSION_HEX < 0x03050000
+ #undef CYTHON_USE_ASYNC_SLOTS
+ #define CYTHON_USE_ASYNC_SLOTS 0
+ #elif !defined(CYTHON_USE_ASYNC_SLOTS)
+ #define CYTHON_USE_ASYNC_SLOTS 1
+ #endif
#undef CYTHON_USE_PYLIST_INTERNALS
#define CYTHON_USE_PYLIST_INTERNALS 0
#undef CYTHON_USE_UNICODE_INTERNALS
@@ -71,6 +75,14 @@
#define CYTHON_FAST_THREAD_STATE 0
#undef CYTHON_FAST_PYCALL
#define CYTHON_FAST_PYCALL 0
+ #undef CYTHON_PEP489_MULTI_PHASE_INIT
+ #define CYTHON_PEP489_MULTI_PHASE_INIT 0
+ #undef CYTHON_USE_TP_FINALIZE
+ #define CYTHON_USE_TP_FINALIZE 0
+ #undef CYTHON_USE_DICT_VERSIONS
+ #define CYTHON_USE_DICT_VERSIONS 0
+ #undef CYTHON_USE_EXC_INFO_STACK
+ #define CYTHON_USE_EXC_INFO_STACK 0
#elif defined(PYSTON_VERSION)
#define CYTHON_COMPILING_IN_PYPY 0
@@ -106,6 +118,14 @@
#define CYTHON_FAST_THREAD_STATE 0
#undef CYTHON_FAST_PYCALL
#define CYTHON_FAST_PYCALL 0
+ #undef CYTHON_PEP489_MULTI_PHASE_INIT
+ #define CYTHON_PEP489_MULTI_PHASE_INIT 0
+ #undef CYTHON_USE_TP_FINALIZE
+ #define CYTHON_USE_TP_FINALIZE 0
+ #undef CYTHON_USE_DICT_VERSIONS
+ #define CYTHON_USE_DICT_VERSIONS 0
+ #undef CYTHON_USE_EXC_INFO_STACK
+ #define CYTHON_USE_EXC_INFO_STACK 0
#else
#define CYTHON_COMPILING_IN_PYPY 0
@@ -161,6 +181,18 @@
#ifndef CYTHON_FAST_PYCALL
#define CYTHON_FAST_PYCALL 1
#endif
+ #ifndef CYTHON_PEP489_MULTI_PHASE_INIT
+ #define CYTHON_PEP489_MULTI_PHASE_INIT (PY_VERSION_HEX >= 0x03050000)
+ #endif
+ #ifndef CYTHON_USE_TP_FINALIZE
+ #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1)
+ #endif
+ #ifndef CYTHON_USE_DICT_VERSIONS
+ #define CYTHON_USE_DICT_VERSIONS (PY_VERSION_HEX >= 0x030600B1)
+ #endif
+ #ifndef CYTHON_USE_EXC_INFO_STACK
+ #define CYTHON_USE_EXC_INFO_STACK (PY_VERSION_HEX >= 0x030700A3)
+ #endif
#endif
#if !defined(CYTHON_FAST_PYCCALL)
@@ -173,8 +205,168 @@
#undef SHIFT
#undef BASE
#undef MASK
+ /* Compile-time sanity check that these are indeed equal. Github issue #2670. */
+ #ifdef SIZEOF_VOID_P
+ enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) };
+ #endif
+#endif
+
+#ifndef __has_attribute
+ #define __has_attribute(x) 0
+#endif
+
+#ifndef __has_cpp_attribute
+ #define __has_cpp_attribute(x) 0
+#endif
+
+// restrict
+#ifndef CYTHON_RESTRICT
+ #if defined(__GNUC__)
+ #define CYTHON_RESTRICT __restrict__
+ #elif defined(_MSC_VER) && _MSC_VER >= 1400
+ #define CYTHON_RESTRICT __restrict
+ #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
+ #define CYTHON_RESTRICT restrict
+ #else
+ #define CYTHON_RESTRICT
+ #endif
+#endif
+
+// unused attribute
+#ifndef CYTHON_UNUSED
+# if defined(__GNUC__)
+# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4))
+# define CYTHON_UNUSED __attribute__ ((__unused__))
+# else
+# define CYTHON_UNUSED
+# endif
+# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER))
+# define CYTHON_UNUSED __attribute__ ((__unused__))
+# else
+# define CYTHON_UNUSED
+# endif
+#endif
+
+#ifndef CYTHON_MAYBE_UNUSED_VAR
+# if defined(__cplusplus)
+ template<class T> void CYTHON_MAYBE_UNUSED_VAR( const T& ) { }
+# else
+# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x)
+# endif
+#endif
+
+#ifndef CYTHON_NCP_UNUSED
+# if CYTHON_COMPILING_IN_CPYTHON
+# define CYTHON_NCP_UNUSED
+# else
+# define CYTHON_NCP_UNUSED CYTHON_UNUSED
+# endif
+#endif
+
+#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None)
+
+#ifdef _MSC_VER
+ #ifndef _MSC_STDINT_H_
+ #if _MSC_VER < 1300
+ typedef unsigned char uint8_t;
+ typedef unsigned int uint32_t;
+ #else
+ typedef unsigned __int8 uint8_t;
+ typedef unsigned __int32 uint32_t;
+ #endif
+ #endif
+#else
+ #include <stdint.h>
+#endif
+
+
+#ifndef CYTHON_FALLTHROUGH
+ #if defined(__cplusplus) && __cplusplus >= 201103L
+ #if __has_cpp_attribute(fallthrough)
+ #define CYTHON_FALLTHROUGH [[fallthrough]]
+ #elif __has_cpp_attribute(clang::fallthrough)
+ #define CYTHON_FALLTHROUGH [[clang::fallthrough]]
+ #elif __has_cpp_attribute(gnu::fallthrough)
+ #define CYTHON_FALLTHROUGH [[gnu::fallthrough]]
+ #endif
+ #endif
+
+ #ifndef CYTHON_FALLTHROUGH
+ #if __has_attribute(fallthrough)
+ #define CYTHON_FALLTHROUGH __attribute__((fallthrough))
+ #else
+ #define CYTHON_FALLTHROUGH
+ #endif
+ #endif
+
+ #if defined(__clang__ ) && defined(__apple_build_version__)
+ #if __apple_build_version__ < 7000000 /* Xcode < 7.0 */
+ #undef CYTHON_FALLTHROUGH
+ #define CYTHON_FALLTHROUGH
+ #endif
+ #endif
+#endif
+
+/////////////// CInitCode ///////////////
+
+// inline attribute
+#ifndef CYTHON_INLINE
+ #if defined(__clang__)
+ #define CYTHON_INLINE __inline__ __attribute__ ((__unused__))
+ #elif defined(__GNUC__)
+ #define CYTHON_INLINE __inline__
+ #elif defined(_MSC_VER)
+ #define CYTHON_INLINE __inline
+ #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
+ #define CYTHON_INLINE inline
+ #else
+ #define CYTHON_INLINE
+ #endif
+#endif
+
+
+/////////////// CppInitCode ///////////////
+
+#ifndef __cplusplus
+ #error "Cython files generated with the C++ option must be compiled with a C++ compiler."
+#endif
+
+// inline attribute
+#ifndef CYTHON_INLINE
+ #if defined(__clang__)
+ #define CYTHON_INLINE __inline__ __attribute__ ((__unused__))
+ #else
+ #define CYTHON_INLINE inline
+ #endif
#endif
+// Work around clang bug http://stackoverflow.com/questions/21847816/c-invoke-nested-template-class-destructor
+template<typename T>
+void __Pyx_call_destructor(T& x) {
+ x.~T();
+}
+
+// Used for temporary variables of "reference" type.
+template<typename T>
+class __Pyx_FakeReference {
+ public:
+ __Pyx_FakeReference() : ptr(NULL) { }
+ // __Pyx_FakeReference(T& ref) : ptr(&ref) { }
+ // Const version needed as Cython doesn't know about const overloads (e.g. for stl containers).
+ __Pyx_FakeReference(const T& ref) : ptr(const_cast<T*>(&ref)) { }
+ T *operator->() { return ptr; }
+ T *operator&() { return ptr; }
+ operator T&() { return *ptr; }
+ // TODO(robertwb): Delegate all operators (or auto-generate unwrapping code where needed).
+ template<typename U> bool operator ==(U other) { return *ptr == other; }
+ template<typename U> bool operator !=(U other) { return *ptr != other; }
+ private:
+ T *ptr;
+};
+
+
+/////////////// PythonCompatibility ///////////////
+
#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag)
#define Py_OptimizeFlag 0
#endif
@@ -189,8 +381,13 @@
#define __Pyx_DefaultClassType PyClass_Type
#else
#define __Pyx_BUILTIN_MODULE_NAME "builtins"
+#if PY_VERSION_HEX >= 0x030800A4 && PY_VERSION_HEX < 0x030800B2
+ #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) \
+ PyCode_New(a, 0, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
+#else
#define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) \
PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
+#endif
#define __Pyx_DefaultClassType PyType_Type
#endif
@@ -207,14 +404,23 @@
#define Py_TPFLAGS_HAVE_FINALIZE 0
#endif
-#if PY_VERSION_HEX < 0x030700A0 || !defined(METH_FASTCALL)
- // new in CPython 3.6, but changed in 3.7 - see https://bugs.python.org/issue29464
+#ifndef METH_STACKLESS
+ // already defined for Stackless Python (all versions) and C-Python >= 3.7
+ // value if defined: Stackless Python < 3.6: 0x80 else 0x100
+ #define METH_STACKLESS 0
+#endif
+#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL)
+ // new in CPython 3.6, but changed in 3.7 - see
+ // positional-only parameters:
+ // https://bugs.python.org/issue29464
+ // const args:
+ // https://bugs.python.org/issue32240
#ifndef METH_FASTCALL
#define METH_FASTCALL 0x80
#endif
- typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject **args, Py_ssize_t nargs);
+ typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs);
// new in CPython 3.7, used to be old signature of _PyCFunctionFast() in 3.6
- typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject **args,
+ typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args,
Py_ssize_t nargs, PyObject *kwnames);
#else
#define __Pyx_PyCFunctionFast _PyCFunctionFast
@@ -222,11 +428,98 @@
#endif
#if CYTHON_FAST_PYCCALL
#define __Pyx_PyFastCFunction_Check(func) \
- ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS)))))
+ ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS)))))
#else
#define __Pyx_PyFastCFunction_Check(func) 0
#endif
+#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc)
+ #define PyObject_Malloc(s) PyMem_Malloc(s)
+ #define PyObject_Free(p) PyMem_Free(p)
+ #define PyObject_Realloc(p) PyMem_Realloc(p)
+#endif
+
+#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030400A1
+ #define PyMem_RawMalloc(n) PyMem_Malloc(n)
+ #define PyMem_RawRealloc(p, n) PyMem_Realloc(p, n)
+ #define PyMem_RawFree(p) PyMem_Free(p)
+#endif
+
+#if CYTHON_COMPILING_IN_PYSTON
+ // special C-API functions only in Pyston
+ #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co)
+ #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno)
+#else
+ #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0)
+ #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno)
+#endif
+
+#if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000
+ #define __Pyx_PyThreadState_Current PyThreadState_GET()
+#elif PY_VERSION_HEX >= 0x03060000
+ //#elif PY_VERSION_HEX >= 0x03050200
+ // Actually added in 3.5.2, but compiling against that does not guarantee that we get imported there.
+ #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet()
+#elif PY_VERSION_HEX >= 0x03000000
+ #define __Pyx_PyThreadState_Current PyThreadState_GET()
+#else
+ #define __Pyx_PyThreadState_Current _PyThreadState_Current
+#endif
+
+// TSS (Thread Specific Storage) API
+#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT)
+#include "pythread.h"
+#define Py_tss_NEEDS_INIT 0
+typedef int Py_tss_t;
+static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) {
+ *key = PyThread_create_key();
+ return 0; /* PyThread_create_key reports success always */
+}
+static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) {
+ Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t));
+ *key = Py_tss_NEEDS_INIT;
+ return key;
+}
+static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) {
+ PyObject_Free(key);
+}
+static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) {
+ return *key != Py_tss_NEEDS_INIT;
+}
+static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) {
+ PyThread_delete_key(*key);
+ *key = Py_tss_NEEDS_INIT;
+}
+static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) {
+ return PyThread_set_key_value(*key, value);
+}
+static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) {
+ return PyThread_get_key_value(*key);
+}
+// PyThread_delete_key_value(key) is equivalent to PyThread_set_key_value(key, NULL)
+// PyThread_ReInitTLS() is a no-op
+#endif /* TSS (Thread Specific Storage) API */
+
+#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized)
+#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n))
+#else
+#define __Pyx_PyDict_NewPresized(n) PyDict_New()
+#endif
+
+#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION
+ #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y)
+ #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y)
+#else
+ #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y)
+ #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y)
+#endif
+
+#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 && CYTHON_USE_UNICODE_INTERNALS
+#define __Pyx_PyDict_GetItemStr(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash)
+#else
+#define __Pyx_PyDict_GetItemStr(dict, name) PyDict_GetItem(dict, name)
+#endif
+
/* new Py3.3 unicode type (PEP 393) */
#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND)
#define CYTHON_PEP393_ENABLED 1
@@ -278,23 +571,9 @@
#define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt)
#endif
-#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc)
- #define PyObject_Malloc(s) PyMem_Malloc(s)
- #define PyObject_Free(p) PyMem_Free(p)
- #define PyObject_Realloc(p) PyMem_Realloc(p)
-#endif
-
-#if CYTHON_COMPILING_IN_PYSTON
- // special C-API functions only in Pyston
- #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co)
- #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno)
-#else
- #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0)
- #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno)
-#endif
-
-#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None)) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b))
-#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None)) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b))
+// ("..." % x) must call PyNumber_Remainder() if x is a string subclass that implements "__rmod__()".
+#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b))
+#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b))
#if PY_MAJOR_VERSION >= 3
#define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b)
@@ -312,6 +591,7 @@
#define PyString_Type PyUnicode_Type
#define PyString_Check PyUnicode_Check
#define PyString_CheckExact PyUnicode_CheckExact
+ #define PyObject_Unicode PyObject_Str
#endif
#if PY_MAJOR_VERSION >= 3
@@ -326,8 +606,12 @@
#define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type)
#endif
-#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type)
-#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception)
+#if CYTHON_ASSUME_SAFE_MACROS
+ #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq)
+#else
+ // NOTE: might fail with exception => check for -1
+ #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq)
+#endif
#if PY_MAJOR_VERSION >= 3
#define PyIntObject PyLongObject
@@ -367,19 +651,11 @@
#endif
#if PY_MAJOR_VERSION >= 3
- #define __Pyx_PyMethod_New(func, self, klass) ((self) ? PyMethod_New(func, self) : PyInstanceMethod_New(func))
+ #define __Pyx_PyMethod_New(func, self, klass) ((self) ? PyMethod_New(func, self) : (Py_INCREF(func), func))
#else
#define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass)
#endif
-#ifndef __has_attribute
- #define __has_attribute(x) 0
-#endif
-
-#ifndef __has_cpp_attribute
- #define __has_cpp_attribute(x) 0
-#endif
-
// backport of PyAsyncMethods from Py3.5 to older Py3.x versions
// (mis-)using the "tp_reserved" type slot which is re-activated as "tp_as_async" in Py3.5
#if CYTHON_USE_ASYNC_SLOTS
@@ -387,152 +663,193 @@
#define __Pyx_PyAsyncMethodsStruct PyAsyncMethods
#define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async)
#else
+ #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved))
+ #endif
+#else
+ #define __Pyx_PyType_AsAsync(obj) NULL
+#endif
+#ifndef __Pyx_PyAsyncMethodsStruct
typedef struct {
unaryfunc am_await;
unaryfunc am_aiter;
unaryfunc am_anext;
} __Pyx_PyAsyncMethodsStruct;
- #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved))
- #endif
-#else
- #define __Pyx_PyType_AsAsync(obj) NULL
#endif
-// restrict
-#ifndef CYTHON_RESTRICT
- #if defined(__GNUC__)
- #define CYTHON_RESTRICT __restrict__
- #elif defined(_MSC_VER) && _MSC_VER >= 1400
- #define CYTHON_RESTRICT __restrict
- #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
- #define CYTHON_RESTRICT restrict
- #else
- #define CYTHON_RESTRICT
- #endif
-#endif
-// unused attribute
-#ifndef CYTHON_UNUSED
-# if defined(__GNUC__)
-# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4))
-# define CYTHON_UNUSED __attribute__ ((__unused__))
-# else
-# define CYTHON_UNUSED
-# endif
-# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER))
-# define CYTHON_UNUSED __attribute__ ((__unused__))
-# else
-# define CYTHON_UNUSED
-# endif
-#endif
+/////////////// SmallCodeConfig.proto ///////////////
-#ifndef CYTHON_MAYBE_UNUSED_VAR
-# if defined(__cplusplus)
- template<class T> void CYTHON_MAYBE_UNUSED_VAR( const T& ) { }
-# else
-# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x)
-# endif
+#ifndef CYTHON_SMALL_CODE
+#if defined(__clang__)
+ #define CYTHON_SMALL_CODE
+#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3))
+ #define CYTHON_SMALL_CODE __attribute__((cold))
+#else
+ #define CYTHON_SMALL_CODE
#endif
-
-#ifndef CYTHON_NCP_UNUSED
-# if CYTHON_COMPILING_IN_CPYTHON
-# define CYTHON_NCP_UNUSED
-# else
-# define CYTHON_NCP_UNUSED CYTHON_UNUSED
-# endif
#endif
-#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None)
-#ifdef _MSC_VER
- #ifndef _MSC_STDINT_H_
- #if _MSC_VER < 1300
- typedef unsigned char uint8_t;
- typedef unsigned int uint32_t;
- #else
- typedef unsigned __int8 uint8_t;
- typedef unsigned __int32 uint32_t;
- #endif
- #endif
+/////////////// PyModInitFuncType.proto ///////////////
+
+#if PY_MAJOR_VERSION < 3
+
+#ifdef CYTHON_NO_PYINIT_EXPORT
+// define this to void manually because PyMODINIT_FUNC adds __declspec(dllexport) to it's definition.
+#define __Pyx_PyMODINIT_FUNC void
#else
- #include <stdint.h>
+#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC
#endif
+#else
-#ifndef CYTHON_FALLTHROUGH
- #ifdef __cplusplus
- #if __has_cpp_attribute(fallthrough)
- #define CYTHON_FALLTHROUGH [[fallthrough]]
- #elif __has_cpp_attribute(clang::fallthrough)
- #define CYTHON_FALLTHROUGH [[clang::fallthrough]]
- #endif
- #endif
+#ifdef CYTHON_NO_PYINIT_EXPORT
+// define this to PyObject * manually because PyMODINIT_FUNC adds __declspec(dllexport) to it's definition.
+#define __Pyx_PyMODINIT_FUNC PyObject *
+#else
+#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC
+#endif
- #ifndef CYTHON_FALLTHROUGH
- #if __has_attribute(fallthrough) || (defined(__GNUC__) && defined(__attribute__))
- #define CYTHON_FALLTHROUGH __attribute__((fallthrough))
- #else
- #define CYTHON_FALLTHROUGH
- #endif
- #endif
#endif
-/////////////// CInitCode ///////////////
-// inline attribute
-#ifndef CYTHON_INLINE
- #if defined(__clang__)
- #define CYTHON_INLINE __inline__ __attribute__ ((__unused__))
- #elif defined(__GNUC__)
- #define CYTHON_INLINE __inline__
- #elif defined(_MSC_VER)
- #define CYTHON_INLINE __inline
- #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
- #define CYTHON_INLINE inline
- #else
- #define CYTHON_INLINE
- #endif
+/////////////// FastTypeChecks.proto ///////////////
+
+#if CYTHON_COMPILING_IN_CPYTHON
+#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type)
+static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b);/*proto*/
+static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type);/*proto*/
+static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2);/*proto*/
+#else
+#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type)
+#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type)
+#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2))
#endif
+#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception)
-/////////////// CppInitCode ///////////////
+/////////////// FastTypeChecks ///////////////
+//@requires: Exceptions.c::PyThreadStateGet
+//@requires: Exceptions.c::PyErrFetchRestore
+
+#if CYTHON_COMPILING_IN_CPYTHON
+static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) {
+ while (a) {
+ a = a->tp_base;
+ if (a == b)
+ return 1;
+ }
+ return b == &PyBaseObject_Type;
+}
-#ifndef __cplusplus
- #error "Cython files generated with the C++ option must be compiled with a C++ compiler."
+static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) {
+ PyObject *mro;
+ if (a == b) return 1;
+ mro = a->tp_mro;
+ if (likely(mro)) {
+ Py_ssize_t i, n;
+ n = PyTuple_GET_SIZE(mro);
+ for (i = 0; i < n; i++) {
+ if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b)
+ return 1;
+ }
+ return 0;
+ }
+ // should only get here for incompletely initialised types, i.e. never under normal usage patterns
+ return __Pyx_InBases(a, b);
+}
+
+
+#if PY_MAJOR_VERSION == 2
+static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) {
+ // PyObject_IsSubclass() can recurse and therefore is not safe
+ PyObject *exception, *value, *tb;
+ int res;
+ __Pyx_PyThreadState_declare
+ __Pyx_PyThreadState_assign
+ __Pyx_ErrFetch(&exception, &value, &tb);
+
+ res = exc_type1 ? PyObject_IsSubclass(err, exc_type1) : 0;
+ // This function must not fail, so print the error here (which also clears it)
+ if (unlikely(res == -1)) {
+ PyErr_WriteUnraisable(err);
+ res = 0;
+ }
+ if (!res) {
+ res = PyObject_IsSubclass(err, exc_type2);
+ // This function must not fail, so print the error here (which also clears it)
+ if (unlikely(res == -1)) {
+ PyErr_WriteUnraisable(err);
+ res = 0;
+ }
+ }
+
+ __Pyx_ErrRestore(exception, value, tb);
+ return res;
+}
+#else
+static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) {
+ int res = exc_type1 ? __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type1) : 0;
+ if (!res) {
+ res = __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2);
+ }
+ return res;
+}
#endif
-// inline attribute
-#ifndef CYTHON_INLINE
- #if defined(__clang__)
- #define CYTHON_INLINE __inline__ __attribute__ ((__unused__))
- #else
- #define CYTHON_INLINE inline
- #endif
+// so far, we only call PyErr_GivenExceptionMatches() with an exception type (not instance) as first argument
+// => optimise for that case
+
+static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) {
+ Py_ssize_t i, n;
+ assert(PyExceptionClass_Check(exc_type));
+ n = PyTuple_GET_SIZE(tuple);
+#if PY_MAJOR_VERSION >= 3
+ // the tighter subtype checking in Py3 allows faster out-of-order comparison
+ for (i=0; i<n; i++) {
+ if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1;
+ }
+#endif
+ for (i=0; i<n; i++) {
+ PyObject *t = PyTuple_GET_ITEM(tuple, i);
+ #if PY_MAJOR_VERSION < 3
+ if (likely(exc_type == t)) return 1;
+ #endif
+ if (likely(PyExceptionClass_Check(t))) {
+ if (__Pyx_inner_PyErr_GivenExceptionMatches2(exc_type, NULL, t)) return 1;
+ } else {
+ // FIXME: Py3: PyErr_SetString(PyExc_TypeError, "catching classes that do not inherit from BaseException is not allowed");
+ }
+ }
+ return 0;
+}
-template <typename T>
-void __Pyx_call_destructor(T& x) {
- x.~T();
+static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject* exc_type) {
+ if (likely(err == exc_type)) return 1;
+ if (likely(PyExceptionClass_Check(err))) {
+ if (likely(PyExceptionClass_Check(exc_type))) {
+ return __Pyx_inner_PyErr_GivenExceptionMatches2(err, NULL, exc_type);
+ } else if (likely(PyTuple_Check(exc_type))) {
+ return __Pyx_PyErr_GivenExceptionMatchesTuple(err, exc_type);
+ } else {
+ // FIXME: Py3: PyErr_SetString(PyExc_TypeError, "catching classes that do not inherit from BaseException is not allowed");
+ }
+ }
+ return PyErr_GivenExceptionMatches(err, exc_type);
}
-// Used for temporary variables of "reference" type.
-template<typename T>
-class __Pyx_FakeReference {
- public:
- __Pyx_FakeReference() : ptr(NULL) { }
- // __Pyx_FakeReference(T& ref) : ptr(&ref) { }
- // Const version needed as Cython doesn't know about const overloads (e.g. for stl containers).
- __Pyx_FakeReference(const T& ref) : ptr(const_cast<T*>(&ref)) { }
- T *operator->() { return ptr; }
- T *operator&() { return ptr; }
- operator T&() { return *ptr; }
- // TODO(robertwb): Delegate all operators (or auto-generate unwrapping code where needed).
- template<typename U> bool operator ==(U other) { return *ptr == other; }
- template<typename U> bool operator !=(U other) { return *ptr != other; }
- private:
- T *ptr;
-};
+static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *exc_type1, PyObject *exc_type2) {
+ // Only used internally with known exception types => pure safety check assertions.
+ assert(PyExceptionClass_Check(exc_type1));
+ assert(PyExceptionClass_Check(exc_type2));
+ if (likely(err == exc_type1 || err == exc_type2)) return 1;
+ if (likely(PyExceptionClass_Check(err))) {
+ return __Pyx_inner_PyErr_GivenExceptionMatches2(err, exc_type1, exc_type2);
+ }
+ return (PyErr_GivenExceptionMatches(err, exc_type1) || PyErr_GivenExceptionMatches(err, exc_type2));
+}
+
+#endif
/////////////// MathInitCode ///////////////
@@ -568,6 +885,7 @@
const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; /*proto*/
/////////////// ForceInitThreads.proto ///////////////
+//@proto_block: utility_code_proto_before_types
#ifndef __PYX_FORCE_INIT_THREADS
#define __PYX_FORCE_INIT_THREADS 0
@@ -579,6 +897,86 @@
PyEval_InitThreads();
#endif
+
+/////////////// ModuleCreationPEP489 ///////////////
+//@substitute: naming
+
+//#if CYTHON_PEP489_MULTI_PHASE_INIT
+static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) {
+ #if PY_VERSION_HEX >= 0x030700A1
+ static PY_INT64_T main_interpreter_id = -1;
+ PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp);
+ if (main_interpreter_id == -1) {
+ main_interpreter_id = current_id;
+ return (unlikely(current_id == -1)) ? -1 : 0;
+ } else if (unlikely(main_interpreter_id != current_id))
+
+ #else
+ static PyInterpreterState *main_interpreter = NULL;
+ PyInterpreterState *current_interpreter = PyThreadState_Get()->interp;
+ if (!main_interpreter) {
+ main_interpreter = current_interpreter;
+ } else if (unlikely(main_interpreter != current_interpreter))
+ #endif
+
+ {
+ PyErr_SetString(
+ PyExc_ImportError,
+ "Interpreter change detected - this module can only be loaded into one interpreter per process.");
+ return -1;
+ }
+ return 0;
+}
+
+static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) {
+ PyObject *value = PyObject_GetAttrString(spec, from_name);
+ int result = 0;
+ if (likely(value)) {
+ if (allow_none || value != Py_None) {
+ result = PyDict_SetItemString(moddict, to_name, value);
+ }
+ Py_DECREF(value);
+ } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) {
+ PyErr_Clear();
+ } else {
+ result = -1;
+ }
+ return result;
+}
+
+static CYTHON_SMALL_CODE PyObject* ${pymodule_create_func_cname}(PyObject *spec, CYTHON_UNUSED PyModuleDef *def) {
+ PyObject *module = NULL, *moddict, *modname;
+
+ // For now, we only have exactly one module instance.
+ if (__Pyx_check_single_interpreter())
+ return NULL;
+ if (${module_cname})
+ return __Pyx_NewRef(${module_cname});
+
+ modname = PyObject_GetAttrString(spec, "name");
+ if (unlikely(!modname)) goto bad;
+
+ module = PyModule_NewObject(modname);
+ Py_DECREF(modname);
+ if (unlikely(!module)) goto bad;
+
+ moddict = PyModule_GetDict(module);
+ if (unlikely(!moddict)) goto bad;
+ // moddict is a borrowed reference
+
+ if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", "__loader__", 1) < 0)) goto bad;
+ if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__", 1) < 0)) goto bad;
+ if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__", 1) < 0)) goto bad;
+ if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__", 0) < 0)) goto bad;
+
+ return module;
+bad:
+ Py_XDECREF(module);
+ return NULL;
+}
+//#endif
+
+
/////////////// CodeObjectCache.proto ///////////////
typedef struct {
@@ -807,9 +1205,9 @@
static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) {
PyObject *m = NULL, *p = NULL;
void *r = NULL;
- m = PyImport_ImportModule((char *)modname);
+ m = PyImport_ImportModule(modname);
if (!m) goto end;
- p = PyObject_GetAttrString(m, (char *)"RefNannyAPI");
+ p = PyObject_GetAttrString(m, "RefNannyAPI");
if (!p) goto end;
r = PyLong_AsVoidPtr(p);
end:
@@ -819,17 +1217,35 @@
}
#endif /* CYTHON_REFNANNY */
+
+/////////////// ImportRefnannyAPI ///////////////
+
+#if CYTHON_REFNANNY
+__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny");
+if (!__Pyx_RefNanny) {
+ PyErr_Clear();
+ __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny");
+ if (!__Pyx_RefNanny)
+ Py_FatalError("failed to import 'refnanny' module");
+}
+#endif
+
+
/////////////// RegisterModuleCleanup.proto ///////////////
//@substitute: naming
static void ${cleanup_cname}(PyObject *self); /*proto*/
+
+#if PY_MAJOR_VERSION < 3 || CYTHON_COMPILING_IN_PYPY
static int __Pyx_RegisterCleanup(void); /*proto*/
+#else
+#define __Pyx_RegisterCleanup() (0)
+#endif
/////////////// RegisterModuleCleanup ///////////////
//@substitute: naming
-//@requires: ImportExport.c::ModuleImport
-#if PY_MAJOR_VERSION < 3
+#if PY_MAJOR_VERSION < 3 || CYTHON_COMPILING_IN_PYPY
static PyObject* ${cleanup_cname}_atexit(PyObject *module, CYTHON_UNUSED PyObject *unused) {
${cleanup_cname}(module);
Py_INCREF(Py_None); return Py_None;
@@ -857,7 +1273,7 @@
if (!cleanup_func)
goto bad;
- atexit = __Pyx_ImportModule("atexit");
+ atexit = PyImport_ImportModule("atexit");
if (!atexit)
goto bad;
reg = PyObject_GetAttrString(atexit, "_exithandlers");
@@ -899,12 +1315,6 @@
Py_XDECREF(res);
return ret;
}
-#else
-// fake call purely to work around "unused function" warning for __Pyx_ImportModule()
-static int __Pyx_RegisterCleanup(void) {
- if ((0)) __Pyx_ImportModule(NULL);
- return 0;
-}
#endif
/////////////// FastGil.init ///////////////
@@ -913,6 +1323,7 @@
#endif
/////////////// NoFastGil.proto ///////////////
+//@proto_block: utility_code_proto_before_types
#define __Pyx_PyGILState_Ensure PyGILState_Ensure
#define __Pyx_PyGILState_Release PyGILState_Release
@@ -921,6 +1332,7 @@
#define __Pyx_FastGilFuncInit()
/////////////// FastGil.proto ///////////////
+//@proto_block: utility_code_proto_before_types
struct __Pyx_FastGilVtab {
PyGILState_STATE (*Fast_PyGILState_Ensure)(void);
@@ -970,17 +1382,10 @@
#define __Pyx_FastGIL_PyCapsule \
__Pyx_FastGIL_ABI_module "." __Pyx_FastGIL_PyCapsuleName
-#if PY_VERSION_HEX >= 0x03050000
- #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet()
-#elif PY_VERSION_HEX >= 0x03000000
- #define __Pyx_PyThreadState_Current PyThreadState_Get()
-#elif PY_VERSION_HEX < 0x02070000
+#if PY_VERSION_HEX < 0x02070000
#undef CYTHON_THREAD_LOCAL
-#else
- #define __Pyx_PyThreadState_Current _PyThreadState_Current
#endif
-
#ifdef CYTHON_THREAD_LOCAL
#include "pythread.h"
@@ -1010,8 +1415,9 @@
static PyGILState_STATE __Pyx_FastGil_PyGILState_Ensure(void) {
int current;
+ PyThreadState *tcur;
__Pyx_FastGIL_Remember0();
- PyThreadState *tcur = __Pyx_FastGil_get_tcur();
+ tcur = __Pyx_FastGil_get_tcur();
if (tcur == NULL) {
// Uninitialized, need to initialize now.
return PyGILState_Ensure();
diff -Nru cython-0.26.1/Cython/Utility/ObjectHandling.c cython-0.29.14/Cython/Utility/ObjectHandling.c
--- cython-0.26.1/Cython/Utility/ObjectHandling.c 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/Cython/Utility/ObjectHandling.c 2019-07-07 08:37:19.000000000 +0000
@@ -80,49 +80,69 @@
/////////////// UnpackTuple2.proto ///////////////
-static CYTHON_INLINE int __Pyx_unpack_tuple2(PyObject* tuple, PyObject** value1, PyObject** value2,
- int is_tuple, int has_known_size, int decref_tuple);
+#define __Pyx_unpack_tuple2(tuple, value1, value2, is_tuple, has_known_size, decref_tuple) \
+ (likely(is_tuple || PyTuple_Check(tuple)) ? \
+ (likely(has_known_size || PyTuple_GET_SIZE(tuple) == 2) ? \
+ __Pyx_unpack_tuple2_exact(tuple, value1, value2, decref_tuple) : \
+ (__Pyx_UnpackTupleError(tuple, 2), -1)) : \
+ __Pyx_unpack_tuple2_generic(tuple, value1, value2, has_known_size, decref_tuple))
+
+static CYTHON_INLINE int __Pyx_unpack_tuple2_exact(
+ PyObject* tuple, PyObject** value1, PyObject** value2, int decref_tuple);
+static int __Pyx_unpack_tuple2_generic(
+ PyObject* tuple, PyObject** value1, PyObject** value2, int has_known_size, int decref_tuple);
/////////////// UnpackTuple2 ///////////////
//@requires: UnpackItemEndCheck
//@requires: UnpackTupleError
//@requires: RaiseNeedMoreValuesToUnpack
-static CYTHON_INLINE int __Pyx_unpack_tuple2(PyObject* tuple, PyObject** pvalue1, PyObject** pvalue2,
- int is_tuple, int has_known_size, int decref_tuple) {
- Py_ssize_t index;
- PyObject *value1 = NULL, *value2 = NULL, *iter = NULL;
- if (!is_tuple && unlikely(!PyTuple_Check(tuple))) {
- iternextfunc iternext;
- iter = PyObject_GetIter(tuple);
- if (unlikely(!iter)) goto bad;
- if (decref_tuple) { Py_DECREF(tuple); tuple = NULL; }
- iternext = Py_TYPE(iter)->tp_iternext;
- value1 = iternext(iter); if (unlikely(!value1)) { index = 0; goto unpacking_failed; }
- value2 = iternext(iter); if (unlikely(!value2)) { index = 1; goto unpacking_failed; }
- if (!has_known_size && unlikely(__Pyx_IternextUnpackEndCheck(iternext(iter), 2))) goto bad;
- Py_DECREF(iter);
- } else {
- if (!has_known_size && unlikely(PyTuple_GET_SIZE(tuple) != 2)) {
- __Pyx_UnpackTupleError(tuple, 2);
- goto bad;
- }
+static CYTHON_INLINE int __Pyx_unpack_tuple2_exact(
+ PyObject* tuple, PyObject** pvalue1, PyObject** pvalue2, int decref_tuple) {
+ PyObject *value1 = NULL, *value2 = NULL;
#if CYTHON_COMPILING_IN_PYPY
- value1 = PySequence_ITEM(tuple, 0);
- if (unlikely(!value1)) goto bad;
- value2 = PySequence_ITEM(tuple, 1);
- if (unlikely(!value2)) goto bad;
-#else
- value1 = PyTuple_GET_ITEM(tuple, 0);
- value2 = PyTuple_GET_ITEM(tuple, 1);
- Py_INCREF(value1);
- Py_INCREF(value2);
+ value1 = PySequence_ITEM(tuple, 0); if (unlikely(!value1)) goto bad;
+ value2 = PySequence_ITEM(tuple, 1); if (unlikely(!value2)) goto bad;
+#else
+ value1 = PyTuple_GET_ITEM(tuple, 0); Py_INCREF(value1);
+ value2 = PyTuple_GET_ITEM(tuple, 1); Py_INCREF(value2);
#endif
- if (decref_tuple) { Py_DECREF(tuple); }
+ if (decref_tuple) {
+ Py_DECREF(tuple);
}
+
*pvalue1 = value1;
*pvalue2 = value2;
return 0;
+#if CYTHON_COMPILING_IN_PYPY
+bad:
+ Py_XDECREF(value1);
+ Py_XDECREF(value2);
+ if (decref_tuple) { Py_XDECREF(tuple); }
+ return -1;
+#endif
+}
+
+static int __Pyx_unpack_tuple2_generic(PyObject* tuple, PyObject** pvalue1, PyObject** pvalue2,
+ int has_known_size, int decref_tuple) {
+ Py_ssize_t index;
+ PyObject *value1 = NULL, *value2 = NULL, *iter = NULL;
+ iternextfunc iternext;
+
+ iter = PyObject_GetIter(tuple);
+ if (unlikely(!iter)) goto bad;
+ if (decref_tuple) { Py_DECREF(tuple); tuple = NULL; }
+
+ iternext = Py_TYPE(iter)->tp_iternext;
+ value1 = iternext(iter); if (unlikely(!value1)) { index = 0; goto unpacking_failed; }
+ value2 = iternext(iter); if (unlikely(!value2)) { index = 1; goto unpacking_failed; }
+ if (!has_known_size && unlikely(__Pyx_IternextUnpackEndCheck(iternext(iter), 2))) goto bad;
+
+ Py_DECREF(iter);
+ *pvalue1 = value1;
+ *pvalue2 = value2;
+ return 0;
+
unpacking_failed:
if (!has_known_size && __Pyx_IterFinish() == 0)
__Pyx_RaiseNeedMoreValuesError(index);
@@ -134,49 +154,77 @@
return -1;
}
+
/////////////// IterNext.proto ///////////////
#define __Pyx_PyIter_Next(obj) __Pyx_PyIter_Next2(obj, NULL)
static CYTHON_INLINE PyObject *__Pyx_PyIter_Next2(PyObject *, PyObject *); /*proto*/
/////////////// IterNext ///////////////
+//@requires: Exceptions.c::PyThreadStateGet
+//@requires: Exceptions.c::PyErrFetchRestore
+
+static PyObject *__Pyx_PyIter_Next2Default(PyObject* defval) {
+ PyObject* exc_type;
+ __Pyx_PyThreadState_declare
+ __Pyx_PyThreadState_assign
+ exc_type = __Pyx_PyErr_Occurred();
+ if (unlikely(exc_type)) {
+ if (!defval || unlikely(!__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration)))
+ return NULL;
+ __Pyx_PyErr_Clear();
+ Py_INCREF(defval);
+ return defval;
+ }
+ if (defval) {
+ Py_INCREF(defval);
+ return defval;
+ }
+ __Pyx_PyErr_SetNone(PyExc_StopIteration);
+ return NULL;
+}
+
+static void __Pyx_PyIter_Next_ErrorNoIterator(PyObject *iterator) {
+ PyErr_Format(PyExc_TypeError,
+ "%.200s object is not an iterator", Py_TYPE(iterator)->tp_name);
+}
// originally copied from Py3's builtin_next()
static CYTHON_INLINE PyObject *__Pyx_PyIter_Next2(PyObject* iterator, PyObject* defval) {
PyObject* next;
+ // We always do a quick slot check because calling PyIter_Check() is so wasteful.
iternextfunc iternext = Py_TYPE(iterator)->tp_iternext;
+ if (likely(iternext)) {
#if CYTHON_USE_TYPE_SLOTS
- if (unlikely(!iternext)) {
+ next = iternext(iterator);
+ if (likely(next))
+ return next;
+ #if PY_VERSION_HEX >= 0x02070000
+ if (unlikely(iternext == &_PyObject_NextNotImplemented))
+ return NULL;
+ #endif
#else
- if (unlikely(!iternext) || unlikely(!PyIter_Check(iterator))) {
-#endif
- PyErr_Format(PyExc_TypeError,
- "%.200s object is not an iterator", Py_TYPE(iterator)->tp_name);
+ // Since the slot was set, assume that PyIter_Next() will likely succeed, and properly fail otherwise.
+ // Note: PyIter_Next() crashes in CPython if "tp_iternext" is NULL.
+ next = PyIter_Next(iterator);
+ if (likely(next))
+ return next;
+#endif
+ } else if (CYTHON_USE_TYPE_SLOTS || unlikely(!PyIter_Check(iterator))) {
+ // If CYTHON_USE_TYPE_SLOTS, then the slot was not set and we don't have an iterable.
+ // Otherwise, don't trust "tp_iternext" and rely on PyIter_Check().
+ __Pyx_PyIter_Next_ErrorNoIterator(iterator);
return NULL;
}
- next = iternext(iterator);
- if (likely(next))
- return next;
-#if CYTHON_USE_TYPE_SLOTS
-#if PY_VERSION_HEX >= 0x02070000
- if (unlikely(iternext == &_PyObject_NextNotImplemented))
- return NULL;
-#endif
-#endif
- if (defval) {
- PyObject* exc_type = PyErr_Occurred();
- if (exc_type) {
- if (unlikely(exc_type != PyExc_StopIteration) &&
- !PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))
- return NULL;
- PyErr_Clear();
- }
- Py_INCREF(defval);
- return defval;
+#if !CYTHON_USE_TYPE_SLOTS
+ else {
+ // We have an iterator with an empty "tp_iternext", but didn't call next() on it yet.
+ next = PyIter_Next(iterator);
+ if (likely(next))
+ return next;
}
- if (!PyErr_Occurred())
- PyErr_SetNone(PyExc_StopIteration);
- return NULL;
+#endif
+ return __Pyx_PyIter_Next2Default(defval);
}
/////////////// IterFinish.proto ///////////////
@@ -191,10 +239,10 @@
static CYTHON_INLINE int __Pyx_IterFinish(void) {
#if CYTHON_FAST_THREAD_STATE
- PyThreadState *tstate = PyThreadState_GET();
+ PyThreadState *tstate = __Pyx_PyThreadState_Current;
PyObject* exc_type = tstate->curexc_type;
if (unlikely(exc_type)) {
- if (likely(exc_type == PyExc_StopIteration) || PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration)) {
+ if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) {
PyObject *exc_value, *exc_tb;
exc_value = tstate->curexc_value;
exc_tb = tstate->curexc_traceback;
@@ -223,26 +271,90 @@
#endif
}
+
+/////////////// ObjectGetItem.proto ///////////////
+
+#if CYTHON_USE_TYPE_SLOTS
+static CYTHON_INLINE PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key);/*proto*/
+#else
+#define __Pyx_PyObject_GetItem(obj, key) PyObject_GetItem(obj, key)
+#endif
+
+/////////////// ObjectGetItem ///////////////
+// //@requires: GetItemInt - added in IndexNode as it uses templating.
+
+#if CYTHON_USE_TYPE_SLOTS
+static PyObject *__Pyx_PyObject_GetIndex(PyObject *obj, PyObject* index) {
+ PyObject *runerr;
+ Py_ssize_t key_value;
+ PySequenceMethods *m = Py_TYPE(obj)->tp_as_sequence;
+ if (unlikely(!(m && m->sq_item))) {
+ PyErr_Format(PyExc_TypeError, "'%.200s' object is not subscriptable", Py_TYPE(obj)->tp_name);
+ return NULL;
+ }
+
+ key_value = __Pyx_PyIndex_AsSsize_t(index);
+ if (likely(key_value != -1 || !(runerr = PyErr_Occurred()))) {
+ return __Pyx_GetItemInt_Fast(obj, key_value, 0, 1, 1);
+ }
+
+ // Error handling code -- only manage OverflowError differently.
+ if (PyErr_GivenExceptionMatches(runerr, PyExc_OverflowError)) {
+ PyErr_Clear();
+ PyErr_Format(PyExc_IndexError, "cannot fit '%.200s' into an index-sized integer", Py_TYPE(index)->tp_name);
+ }
+ return NULL;
+}
+
+static PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key) {
+ PyMappingMethods *m = Py_TYPE(obj)->tp_as_mapping;
+ if (likely(m && m->mp_subscript)) {
+ return m->mp_subscript(obj, key);
+ }
+ return __Pyx_PyObject_GetIndex(obj, key);
+}
+#endif
+
+
/////////////// DictGetItem.proto ///////////////
#if PY_MAJOR_VERSION >= 3 && !CYTHON_COMPILING_IN_PYPY
+static PyObject *__Pyx_PyDict_GetItem(PyObject *d, PyObject* key);/*proto*/
+
+#define __Pyx_PyObject_Dict_GetItem(obj, name) \
+ (likely(PyDict_CheckExact(obj)) ? \
+ __Pyx_PyDict_GetItem(obj, name) : PyObject_GetItem(obj, name))
+
+#else
+#define __Pyx_PyDict_GetItem(d, key) PyObject_GetItem(d, key)
+#define __Pyx_PyObject_Dict_GetItem(obj, name) PyObject_GetItem(obj, name)
+#endif
+
+/////////////// DictGetItem ///////////////
+
+#if PY_MAJOR_VERSION >= 3 && !CYTHON_COMPILING_IN_PYPY
static PyObject *__Pyx_PyDict_GetItem(PyObject *d, PyObject* key) {
PyObject *value;
value = PyDict_GetItemWithError(d, key);
if (unlikely(!value)) {
if (!PyErr_Occurred()) {
- PyObject* args = PyTuple_Pack(1, key);
- if (likely(args))
- PyErr_SetObject(PyExc_KeyError, args);
- Py_XDECREF(args);
+ if (unlikely(PyTuple_Check(key))) {
+ // CPython interprets tuples as separate arguments => must wrap them in another tuple.
+ PyObject* args = PyTuple_Pack(1, key);
+ if (likely(args)) {
+ PyErr_SetObject(PyExc_KeyError, args);
+ Py_DECREF(args);
+ }
+ } else {
+ // Avoid tuple packing if possible.
+ PyErr_SetObject(PyExc_KeyError, key);
+ }
}
return NULL;
}
Py_INCREF(value);
return value;
}
-#else
- #define __Pyx_PyDict_GetItem(d, key) PyObject_GetItem(d, key)
#endif
/////////////// GetItemInt.proto ///////////////
@@ -263,13 +375,13 @@
int wraparound, int boundscheck);
{{endfor}}
-static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j);
+static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j);
static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i,
int is_list, int wraparound, int boundscheck);
/////////////// GetItemInt ///////////////
-static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) {
+static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) {
PyObject *r;
if (!j) return NULL;
r = PyObject_GetItem(o, j);
@@ -286,7 +398,7 @@
if (wraparound & unlikely(i < 0)) {
wrapped_i += Py{{type}}_GET_SIZE(o);
}
- if ((!boundscheck) || likely((0 <= wrapped_i) & (wrapped_i < Py{{type}}_GET_SIZE(o)))) {
+ if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, Py{{type}}_GET_SIZE(o)))) {
PyObject *r = Py{{type}}_GET_ITEM(o, wrapped_i);
Py_INCREF(r);
return r;
@@ -304,7 +416,7 @@
#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS
if (is_list || PyList_CheckExact(o)) {
Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyList_GET_SIZE(o);
- if ((!boundscheck) || (likely((n >= 0) & (n < PyList_GET_SIZE(o))))) {
+ if ((!boundscheck) || (likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o))))) {
PyObject *r = PyList_GET_ITEM(o, n);
Py_INCREF(r);
return r;
@@ -312,7 +424,7 @@
}
else if (PyTuple_CheckExact(o)) {
Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyTuple_GET_SIZE(o);
- if ((!boundscheck) || likely((n >= 0) & (n < PyTuple_GET_SIZE(o)))) {
+ if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyTuple_GET_SIZE(o)))) {
PyObject *r = PyTuple_GET_ITEM(o, n);
Py_INCREF(r);
return r;
@@ -351,13 +463,13 @@
(is_list ? (PyErr_SetString(PyExc_IndexError, "list assignment index out of range"), -1) : \
__Pyx_SetItemInt_Generic(o, to_py_func(i), v)))
-static CYTHON_INLINE int __Pyx_SetItemInt_Generic(PyObject *o, PyObject *j, PyObject *v);
+static int __Pyx_SetItemInt_Generic(PyObject *o, PyObject *j, PyObject *v);
static CYTHON_INLINE int __Pyx_SetItemInt_Fast(PyObject *o, Py_ssize_t i, PyObject *v,
int is_list, int wraparound, int boundscheck);
/////////////// SetItemInt ///////////////
-static CYTHON_INLINE int __Pyx_SetItemInt_Generic(PyObject *o, PyObject *j, PyObject *v) {
+static int __Pyx_SetItemInt_Generic(PyObject *o, PyObject *j, PyObject *v) {
int r;
if (!j) return -1;
r = PyObject_SetItem(o, j, v);
@@ -370,7 +482,7 @@
#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS
if (is_list || PyList_CheckExact(o)) {
Py_ssize_t n = (!wraparound) ? i : ((likely(i >= 0)) ? i : i + PyList_GET_SIZE(o));
- if ((!boundscheck) || likely((n >= 0) & (n < PyList_GET_SIZE(o)))) {
+ if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o)))) {
PyObject* old = PyList_GET_ITEM(o, n);
Py_INCREF(v);
PyList_SET_ITEM(o, n, v);
@@ -397,10 +509,11 @@
}
#else
#if CYTHON_COMPILING_IN_PYPY
- if (is_list || (PySequence_Check(o) && !PyDict_Check(o))) {
+ if (is_list || (PySequence_Check(o) && !PyDict_Check(o)))
#else
- if (is_list || PySequence_Check(o)) {
+ if (is_list || PySequence_Check(o))
#endif
+ {
return PySequence_SetItem(o, i, v);
}
#endif
@@ -416,13 +529,13 @@
(is_list ? (PyErr_SetString(PyExc_IndexError, "list assignment index out of range"), -1) : \
__Pyx_DelItem_Generic(o, to_py_func(i))))
-static CYTHON_INLINE int __Pyx_DelItem_Generic(PyObject *o, PyObject *j);
+static int __Pyx_DelItem_Generic(PyObject *o, PyObject *j);
static CYTHON_INLINE int __Pyx_DelItemInt_Fast(PyObject *o, Py_ssize_t i,
int is_list, int wraparound);
/////////////// DelItemInt ///////////////
-static CYTHON_INLINE int __Pyx_DelItem_Generic(PyObject *o, PyObject *j) {
+static int __Pyx_DelItem_Generic(PyObject *o, PyObject *j) {
int r;
if (!j) return -1;
r = PyObject_DelItem(o, j);
@@ -769,7 +882,7 @@
//@requires: CalculateMetaclass
static PyObject *__Pyx_Py3MetaclassGet(PyObject *bases, PyObject *mkw) {
- PyObject *metaclass = mkw ? PyDict_GetItem(mkw, PYIDENT("metaclass")) : NULL;
+ PyObject *metaclass = mkw ? __Pyx_PyDict_GetItemStr(mkw, PYIDENT("metaclass")) : NULL;
if (metaclass) {
Py_INCREF(metaclass);
if (PyDict_DelItem(mkw, PYIDENT("metaclass")) < 0) {
@@ -806,7 +919,7 @@
return NULL;
/* Python2 __metaclass__ */
- metaclass = PyDict_GetItem(dict, PYIDENT("__metaclass__"));
+ metaclass = __Pyx_PyDict_GetItemStr(dict, PYIDENT("__metaclass__"));
if (metaclass) {
Py_INCREF(metaclass);
if (PyType_Check(metaclass)) {
@@ -917,7 +1030,7 @@
PyErr_SetString(PyExc_SystemError, "Missing type object");
return 0;
}
- if (likely(PyObject_TypeCheck(obj, type)))
+ if (likely(__Pyx_TypeCheck(obj, type)))
return 1;
PyErr_Format(PyExc_TypeError, "Cannot convert %.200s to %.200s",
Py_TYPE(obj)->tp_name, type->tp_name);
@@ -939,6 +1052,37 @@
return unlikely(result < 0) ? result : (result == (eq == Py_EQ));
}
+/////////////// PySetContains.proto ///////////////
+
+static CYTHON_INLINE int __Pyx_PySet_ContainsTF(PyObject* key, PyObject* set, int eq); /* proto */
+
+/////////////// PySetContains ///////////////
+//@requires: Builtins.c::pyfrozenset_new
+
+static int __Pyx_PySet_ContainsUnhashable(PyObject *set, PyObject *key) {
+ int result = -1;
+ if (PySet_Check(key) && PyErr_ExceptionMatches(PyExc_TypeError)) {
+ /* Convert key to frozenset */
+ PyObject *tmpkey;
+ PyErr_Clear();
+ tmpkey = __Pyx_PyFrozenSet_New(key);
+ if (tmpkey != NULL) {
+ result = PySet_Contains(set, tmpkey);
+ Py_DECREF(tmpkey);
+ }
+ }
+ return result;
+}
+
+static CYTHON_INLINE int __Pyx_PySet_ContainsTF(PyObject* key, PyObject* set, int eq) {
+ int result = PySet_Contains(set, key);
+
+ if (unlikely(result < 0)) {
+ result = __Pyx_PySet_ContainsUnhashable(set, key);
+ }
+ return unlikely(result < 0) ? result : (result == (eq == Py_EQ));
+}
+
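[editor's note: the fallback above mirrors a documented CPython behaviour: a plain `set` is unhashable, so using one as a containment key raises `TypeError` unless the key is first converted to an equivalent `frozenset`. A minimal Python sketch of the semantics the helper emulates:]

```python
# A plain set is unhashable, so CPython (and the helper above) retries the
# containment test with an equivalent frozenset key.
haystack = {frozenset({1, 2}), frozenset({3})}

# Works even though {1, 2} itself could never be *stored* in a set:
print({1, 2} in haystack)   # key is converted to frozenset({1, 2}) internally
print({4} in haystack)

try:
    hash({1, 2})            # direct hashing still fails, as in the C code
except TypeError as exc:
    print("unhashable:", exc)
```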
/////////////// PySequenceContains.proto ///////////////
static CYTHON_INLINE int __Pyx_PySequence_ContainsTF(PyObject* item, PyObject* seq, int eq) {
@@ -975,43 +1119,113 @@
/////////////// GetNameInClass.proto ///////////////
-static PyObject *__Pyx_GetNameInClass(PyObject *nmspace, PyObject *name); /*proto*/
+#define __Pyx_GetNameInClass(var, nmspace, name) (var) = __Pyx__GetNameInClass(nmspace, name)
+static PyObject *__Pyx__GetNameInClass(PyObject *nmspace, PyObject *name); /*proto*/
/////////////// GetNameInClass ///////////////
//@requires: PyObjectGetAttrStr
//@requires: GetModuleGlobalName
+//@requires: Exceptions.c::PyThreadStateGet
+//@requires: Exceptions.c::PyErrFetchRestore
+//@requires: Exceptions.c::PyErrExceptionMatches
-static PyObject *__Pyx_GetNameInClass(PyObject *nmspace, PyObject *name) {
+static PyObject *__Pyx_GetGlobalNameAfterAttributeLookup(PyObject *name) {
+ PyObject *result;
+ __Pyx_PyThreadState_declare
+ __Pyx_PyThreadState_assign
+ if (unlikely(!__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError)))
+ return NULL;
+ __Pyx_PyErr_Clear();
+ __Pyx_GetModuleGlobalNameUncached(result, name);
+ return result;
+}
+
+static PyObject *__Pyx__GetNameInClass(PyObject *nmspace, PyObject *name) {
PyObject *result;
result = __Pyx_PyObject_GetAttrStr(nmspace, name);
- if (!result)
- result = __Pyx_GetModuleGlobalName(name);
+ if (!result) {
+ result = __Pyx_GetGlobalNameAfterAttributeLookup(name);
+ }
return result;
}
+
+/////////////// SetNameInClass.proto ///////////////
+
+#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1
+// Identifier names are always interned and have a pre-calculated hash value.
+#define __Pyx_SetNameInClass(ns, name, value) \
+ (likely(PyDict_CheckExact(ns)) ? _PyDict_SetItem_KnownHash(ns, name, value, ((PyASCIIObject *) name)->hash) : PyObject_SetItem(ns, name, value))
+#elif CYTHON_COMPILING_IN_CPYTHON
+#define __Pyx_SetNameInClass(ns, name, value) \
+ (likely(PyDict_CheckExact(ns)) ? PyDict_SetItem(ns, name, value) : PyObject_SetItem(ns, name, value))
+#else
+#define __Pyx_SetNameInClass(ns, name, value) PyObject_SetItem(ns, name, value)
+#endif
+
+
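[editor's note: the `PyObject_SetItem` fallback in `__Pyx_SetNameInClass` exists because a class body's namespace is only a plain dict in the common case; a metaclass `__prepare__` may return any mutable mapping. A small illustrative example (the class names here are invented for the demo):]

```python
# __prepare__ may return any mutable mapping, so class-body assignments
# cannot always take the fast dict path - hence the PyObject_SetItem fallback.
class RecordingNamespace(dict):
    def __init__(self):
        super().__init__()
        self.order = []
    def __setitem__(self, key, value):
        self.order.append(key)        # every class-body assignment lands here
        super().__setitem__(key, value)

class Meta(type):
    @classmethod
    def __prepare__(mcs, name, bases, **kwds):
        return RecordingNamespace()
    def __new__(mcs, name, bases, ns, **kwds):
        cls = super().__new__(mcs, name, bases, dict(ns))
        cls.definition_order = [k for k in ns.order if not k.startswith('__')]
        return cls

class Point(metaclass=Meta):
    x = 1
    y = 2

print(Point.definition_order)
```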
/////////////// GetModuleGlobalName.proto ///////////////
+//@requires: PyDictVersioning
+//@substitute: naming
+
+#if CYTHON_USE_DICT_VERSIONS
+#define __Pyx_GetModuleGlobalName(var, name) { \
+ static PY_UINT64_T __pyx_dict_version = 0; \
+ static PyObject *__pyx_dict_cached_value = NULL; \
+ (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION($moddict_cname))) ? \
+ (likely(__pyx_dict_cached_value) ? __Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) : \
+ __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value); \
+}
+#define __Pyx_GetModuleGlobalNameUncached(var, name) { \
+ PY_UINT64_T __pyx_dict_version; \
+ PyObject *__pyx_dict_cached_value; \
+ (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value); \
+}
+static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value); /*proto*/
+#else
+#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name)
+#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name)
+static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name); /*proto*/
+#endif
-static CYTHON_INLINE PyObject *__Pyx_GetModuleGlobalName(PyObject *name); /*proto*/
/////////////// GetModuleGlobalName ///////////////
//@requires: GetBuiltinName
//@substitute: naming
-static CYTHON_INLINE PyObject *__Pyx_GetModuleGlobalName(PyObject *name) {
+#if CYTHON_USE_DICT_VERSIONS
+static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value)
+#else
+static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name)
+#endif
+{
PyObject *result;
#if !CYTHON_AVOID_BORROWED_REFS
+#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1
+ // Identifier names are always interned and have a pre-calculated hash value.
+ result = _PyDict_GetItem_KnownHash($moddict_cname, name, ((PyASCIIObject *) name)->hash);
+ __PYX_UPDATE_DICT_CACHE($moddict_cname, result, *dict_cached_value, *dict_version)
+ if (likely(result)) {
+ return __Pyx_NewRef(result);
+ } else if (unlikely(PyErr_Occurred())) {
+ return NULL;
+ }
+#else
result = PyDict_GetItem($moddict_cname, name);
+ __PYX_UPDATE_DICT_CACHE($moddict_cname, result, *dict_cached_value, *dict_version)
if (likely(result)) {
- Py_INCREF(result);
- } else {
+ return __Pyx_NewRef(result);
+ }
+#endif
#else
result = PyObject_GetItem($moddict_cname, name);
- if (!result) {
- PyErr_Clear();
-#endif
- result = __Pyx_GetBuiltinName(name);
+ __PYX_UPDATE_DICT_CACHE($moddict_cname, result, *dict_cached_value, *dict_version)
+ if (likely(result)) {
+ return __Pyx_NewRef(result);
}
- return result;
+ PyErr_Clear();
+#endif
+ return __Pyx_GetBuiltinName(name);
}
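[editor's note: stripped of the dict-version caching, the lookup above implements Python's usual global name resolution: module globals first, then builtins. A rough Python equivalent (the function name is ours, not Cython's):]

```python
import builtins

def get_module_global(name, module_globals):
    # Fast path: the module's own globals (the PyDict_GetItem branch above).
    try:
        return module_globals[name]
    except KeyError:
        pass
    # Fallback: the builtins module (__Pyx_GetBuiltinName above).
    try:
        return getattr(builtins, name)
    except AttributeError:
        raise NameError(f"name {name!r} is not defined") from None

print(get_module_global("len", {}))            # falls through to builtins
print(get_module_global("len", {"len": max}))  # shadowed by a module global
```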
//////////////////// GetAttr.proto ////////////////////
@@ -1022,7 +1236,7 @@
//@requires: PyObjectGetAttrStr
static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *o, PyObject *n) {
-#if CYTHON_COMPILING_IN_CPYTHON
+#if CYTHON_USE_TYPE_SLOTS
#if PY_MAJOR_VERSION >= 3
if (likely(PyUnicode_Check(n)))
#else
@@ -1062,9 +1276,102 @@
#define __Pyx_PyObject_LookupSpecial(o,n) __Pyx_PyObject_GetAttrStr(o,n)
#endif
+
+/////////////// PyObject_GenericGetAttrNoDict.proto ///////////////
+
+// Setting "tp_getattro" to anything but "PyObject_GenericGetAttr" disables fast method calls in Py3.7.
+#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000
+static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name);
+#else
+// No-args macro to allow function pointer assignment.
+#define __Pyx_PyObject_GenericGetAttrNoDict PyObject_GenericGetAttr
+#endif
+
+/////////////// PyObject_GenericGetAttrNoDict ///////////////
+
+#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000
+
+static PyObject *__Pyx_RaiseGenericGetAttributeError(PyTypeObject *tp, PyObject *attr_name) {
+ PyErr_Format(PyExc_AttributeError,
+#if PY_MAJOR_VERSION >= 3
+ "'%.50s' object has no attribute '%U'",
+ tp->tp_name, attr_name);
+#else
+ "'%.50s' object has no attribute '%.400s'",
+ tp->tp_name, PyString_AS_STRING(attr_name));
+#endif
+ return NULL;
+}
+
+static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name) {
+ // Copied and adapted from _PyObject_GenericGetAttrWithDict() in CPython 2.6/3.7.
+ // To be used in the "tp_getattro" slot of extension types that have no instance dict and cannot be subclassed.
+ PyObject *descr;
+ PyTypeObject *tp = Py_TYPE(obj);
+
+ if (unlikely(!PyString_Check(attr_name))) {
+ return PyObject_GenericGetAttr(obj, attr_name);
+ }
+
+ assert(!tp->tp_dictoffset);
+ descr = _PyType_Lookup(tp, attr_name);
+ if (unlikely(!descr)) {
+ return __Pyx_RaiseGenericGetAttributeError(tp, attr_name);
+ }
+
+ Py_INCREF(descr);
+
+ #if PY_MAJOR_VERSION < 3
+ if (likely(PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_HAVE_CLASS)))
+ #endif
+ {
+ descrgetfunc f = Py_TYPE(descr)->tp_descr_get;
+ // Optimise for the non-descriptor case because it is faster.
+ if (unlikely(f)) {
+ PyObject *res = f(descr, obj, (PyObject *)tp);
+ Py_DECREF(descr);
+ return res;
+ }
+ }
+ return descr;
+}
+#endif
+
+
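[editor's note: the type-only lookup that `__Pyx_PyObject_GenericGetAttrNoDict` performs can be sketched in Python: walk the type's MRO, and if the found object is a descriptor, invoke its `__get__`. This is a simplification that ignores data-descriptor precedence, which is irrelevant when there is no instance dict:]

```python
def generic_getattr_no_dict(obj, name):
    # Look the name up on the type only - there is no instance __dict__.
    for klass in type(obj).__mro__:
        if name in vars(klass):
            descr = vars(klass)[name]
            get = getattr(type(descr), "__get__", None)
            if get is not None:
                # Descriptor: bind it to the instance (the tp_descr_get call).
                return get(descr, obj, type(obj))
            return descr
    raise AttributeError(
        f"{type(obj).__name__!r} object has no attribute {name!r}")

class C:
    x = 10
    def m(self):
        return self.x * 2

c = C()
print(generic_getattr_no_dict(c, "x"))    # plain class attribute
print(generic_getattr_no_dict(c, "m")())  # function bound via __get__
```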
+/////////////// PyObject_GenericGetAttr.proto ///////////////
+
+// Setting "tp_getattro" to anything but "PyObject_GenericGetAttr" disables fast method calls in Py3.7.
+#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000
+static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name);
+#else
+// No-args macro to allow function pointer assignment.
+#define __Pyx_PyObject_GenericGetAttr PyObject_GenericGetAttr
+#endif
+
+/////////////// PyObject_GenericGetAttr ///////////////
+//@requires: PyObject_GenericGetAttrNoDict
+
+#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000
+static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name) {
+ if (unlikely(Py_TYPE(obj)->tp_dictoffset)) {
+ return PyObject_GenericGetAttr(obj, attr_name);
+ }
+ return __Pyx_PyObject_GenericGetAttrNoDict(obj, attr_name);
+}
+#endif
+
+
/////////////// PyObjectGetAttrStr.proto ///////////////
#if CYTHON_USE_TYPE_SLOTS
+static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name);/*proto*/
+#else
+#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n)
+#endif
+
+/////////////// PyObjectGetAttrStr ///////////////
+
+#if CYTHON_USE_TYPE_SLOTS
static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) {
PyTypeObject* tp = Py_TYPE(obj);
if (likely(tp->tp_getattro))
@@ -1075,14 +1382,22 @@
#endif
return PyObject_GetAttr(obj, attr_name);
}
-#else
-#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n)
#endif
+
/////////////// PyObjectSetAttrStr.proto ///////////////
#if CYTHON_USE_TYPE_SLOTS
-#define __Pyx_PyObject_DelAttrStr(o,n) __Pyx_PyObject_SetAttrStr(o,n,NULL)
+#define __Pyx_PyObject_DelAttrStr(o,n) __Pyx_PyObject_SetAttrStr(o, n, NULL)
+static CYTHON_INLINE int __Pyx_PyObject_SetAttrStr(PyObject* obj, PyObject* attr_name, PyObject* value);/*proto*/
+#else
+#define __Pyx_PyObject_DelAttrStr(o,n) PyObject_DelAttr(o,n)
+#define __Pyx_PyObject_SetAttrStr(o,n,v) PyObject_SetAttr(o,n,v)
+#endif
+
+/////////////// PyObjectSetAttrStr ///////////////
+
+#if CYTHON_USE_TYPE_SLOTS
static CYTHON_INLINE int __Pyx_PyObject_SetAttrStr(PyObject* obj, PyObject* attr_name, PyObject* value) {
PyTypeObject* tp = Py_TYPE(obj);
if (likely(tp->tp_setattro))
@@ -1093,11 +1408,126 @@
#endif
return PyObject_SetAttr(obj, attr_name, value);
}
+#endif
+
+
+/////////////// PyObjectGetMethod.proto ///////////////
+
+static int __Pyx_PyObject_GetMethod(PyObject *obj, PyObject *name, PyObject **method);/*proto*/
+
+/////////////// PyObjectGetMethod ///////////////
+//@requires: PyObjectGetAttrStr
+
+static int __Pyx_PyObject_GetMethod(PyObject *obj, PyObject *name, PyObject **method) {
+ PyObject *attr;
+#if CYTHON_UNPACK_METHODS && CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_PYTYPE_LOOKUP
+ // Copied from _PyObject_GetMethod() in CPython 3.7
+ PyTypeObject *tp = Py_TYPE(obj);
+ PyObject *descr;
+ descrgetfunc f = NULL;
+ PyObject **dictptr, *dict;
+ int meth_found = 0;
+
+ assert (*method == NULL);
+
+ if (unlikely(tp->tp_getattro != PyObject_GenericGetAttr)) {
+ attr = __Pyx_PyObject_GetAttrStr(obj, name);
+ goto try_unpack;
+ }
+ if (unlikely(tp->tp_dict == NULL) && unlikely(PyType_Ready(tp) < 0)) {
+ return 0;
+ }
+
+ descr = _PyType_Lookup(tp, name);
+ if (likely(descr != NULL)) {
+ Py_INCREF(descr);
+ // Repeating the condition below accommodates MSVC's inability to test macros inside of macro expansions.
+#if PY_MAJOR_VERSION >= 3
+ #ifdef __Pyx_CyFunction_USED
+ if (likely(PyFunction_Check(descr) || (Py_TYPE(descr) == &PyMethodDescr_Type) || __Pyx_CyFunction_Check(descr)))
+ #else
+ if (likely(PyFunction_Check(descr) || (Py_TYPE(descr) == &PyMethodDescr_Type)))
+ #endif
#else
-#define __Pyx_PyObject_DelAttrStr(o,n) PyObject_DelAttr(o,n)
-#define __Pyx_PyObject_SetAttrStr(o,n,v) PyObject_SetAttr(o,n,v)
+ // "PyMethodDescr_Type" is not part of the C-API in Py2.
+ #ifdef __Pyx_CyFunction_USED
+ if (likely(PyFunction_Check(descr) || __Pyx_CyFunction_Check(descr)))
+ #else
+ if (likely(PyFunction_Check(descr)))
+ #endif
+#endif
+ {
+ meth_found = 1;
+ } else {
+ f = Py_TYPE(descr)->tp_descr_get;
+ if (f != NULL && PyDescr_IsData(descr)) {
+ attr = f(descr, obj, (PyObject *)Py_TYPE(obj));
+ Py_DECREF(descr);
+ goto try_unpack;
+ }
+ }
+ }
+
+ dictptr = _PyObject_GetDictPtr(obj);
+ if (dictptr != NULL && (dict = *dictptr) != NULL) {
+ Py_INCREF(dict);
+ attr = __Pyx_PyDict_GetItemStr(dict, name);
+ if (attr != NULL) {
+ Py_INCREF(attr);
+ Py_DECREF(dict);
+ Py_XDECREF(descr);
+ goto try_unpack;
+ }
+ Py_DECREF(dict);
+ }
+
+ if (meth_found) {
+ *method = descr;
+ return 1;
+ }
+
+ if (f != NULL) {
+ attr = f(descr, obj, (PyObject *)Py_TYPE(obj));
+ Py_DECREF(descr);
+ goto try_unpack;
+ }
+
+ if (descr != NULL) {
+ *method = descr;
+ return 0;
+ }
+
+ PyErr_Format(PyExc_AttributeError,
+#if PY_MAJOR_VERSION >= 3
+ "'%.50s' object has no attribute '%U'",
+ tp->tp_name, name);
+#else
+ "'%.50s' object has no attribute '%.400s'",
+ tp->tp_name, PyString_AS_STRING(name));
+#endif
+ return 0;
+
+// Generic fallback implementation using normal attribute lookup.
+#else
+ attr = __Pyx_PyObject_GetAttrStr(obj, name);
+ goto try_unpack;
#endif
+try_unpack:
+#if CYTHON_UNPACK_METHODS
+ // Even if we failed to avoid creating a bound method object, it's still worth unpacking it now, if possible.
+ if (likely(attr) && PyMethod_Check(attr) && likely(PyMethod_GET_SELF(attr) == obj)) {
+ PyObject *function = PyMethod_GET_FUNCTION(attr);
+ Py_INCREF(function);
+ Py_DECREF(attr);
+ *method = function;
+ return 1;
+ }
+#endif
+ *method = attr;
+ return 0;
+}
+
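[editor's note: `__Pyx_PyObject_GetMethod` ports CPython 3.7's `LOAD_METHOD`-style optimisation: instead of materialising a bound-method object, it returns the plain function plus a flag, and the caller passes the instance explicitly. The Python-level equivalent of what is being avoided:]

```python
class A:
    def greet(self, name):
        return f"hello {name}"

a = A()

# Slow path: a.greet allocates a temporary bound-method object per lookup.
bound = a.greet
print(bound("world"))

# What the unpacking fast path does instead: fetch the plain function from
# the type, then call it with the instance as the explicit first argument.
func = type(a).greet
print(func(a, "world"))

assert a.greet is not a.greet           # each lookup builds a fresh bound method
assert type(a).greet is type(a).greet   # the underlying function is shared
```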
/////////////// UnpackUnboundCMethod.proto ///////////////
@@ -1123,12 +1553,12 @@
#if CYTHON_COMPILING_IN_CPYTHON
#if PY_MAJOR_VERSION >= 3
// method descriptor type isn't exported in Py2.x, cannot easily check the type there
- if (likely(PyObject_TypeCheck(method, &PyMethodDescr_Type)))
+ if (likely(__Pyx_TypeCheck(method, &PyMethodDescr_Type)))
#endif
{
PyMethodDescrObject *descr = (PyMethodDescrObject*) method;
target->func = descr->d_method->ml_meth;
- target->flag = descr->d_method->ml_flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST);
+ target->flag = descr->d_method->ml_flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_STACKLESS);
}
#endif
return 0;
@@ -1140,18 +1570,19 @@
static PyObject* __Pyx__CallUnboundCMethod0(__Pyx_CachedCFunction* cfunc, PyObject* self); /*proto*/
#if CYTHON_COMPILING_IN_CPYTHON
+// FASTCALL methods receive "&empty_tuple" as simple "PyObject[0]*"
#define __Pyx_CallUnboundCMethod0(cfunc, self) \
- ((likely((cfunc)->func)) ? \
+ (likely((cfunc)->func) ? \
(likely((cfunc)->flag == METH_NOARGS) ? (*((cfunc)->func))(self, NULL) : \
- (likely((cfunc)->flag == (METH_VARARGS | METH_KEYWORDS)) ? ((*(PyCFunctionWithKeywords)(cfunc)->func)(self, $empty_tuple, NULL)) : \
- ((cfunc)->flag == METH_VARARGS ? (*((cfunc)->func))(self, $empty_tuple) : \
- (PY_VERSION_HEX >= 0x030600B1 && (cfunc)->flag == METH_FASTCALL ? \
- (PY_VERSION_HEX >= 0x030700A0 ? \
- (*(__Pyx_PyCFunctionFast)(cfunc)->func)(self, &PyTuple_GET_ITEM($empty_tuple, 0), 0) : \
- (*(__Pyx_PyCFunctionFastWithKeywords)(cfunc)->func)(self, &PyTuple_GET_ITEM($empty_tuple, 0), 0, NULL)) : \
- (PY_VERSION_HEX >= 0x030700A0 && (cfunc)->flag == (METH_FASTCALL | METH_KEYWORDS) ? \
- (*(__Pyx_PyCFunctionFastWithKeywords)(cfunc)->func)(self, &PyTuple_GET_ITEM($empty_tuple, 0), 0, NULL) : \
- __Pyx__CallUnboundCMethod0(cfunc, self)))))) : \
+ (PY_VERSION_HEX >= 0x030600B1 && likely((cfunc)->flag == METH_FASTCALL) ? \
+ (PY_VERSION_HEX >= 0x030700A0 ? \
+ (*(__Pyx_PyCFunctionFast)(void*)(PyCFunction)(cfunc)->func)(self, &$empty_tuple, 0) : \
+ (*(__Pyx_PyCFunctionFastWithKeywords)(void*)(PyCFunction)(cfunc)->func)(self, &$empty_tuple, 0, NULL)) : \
+ (PY_VERSION_HEX >= 0x030700A0 && (cfunc)->flag == (METH_FASTCALL | METH_KEYWORDS) ? \
+ (*(__Pyx_PyCFunctionFastWithKeywords)(void*)(PyCFunction)(cfunc)->func)(self, &$empty_tuple, 0, NULL) : \
+ (likely((cfunc)->flag == (METH_VARARGS | METH_KEYWORDS)) ? ((*(PyCFunctionWithKeywords)(void*)(PyCFunction)(cfunc)->func)(self, $empty_tuple, NULL)) : \
+ ((cfunc)->flag == METH_VARARGS ? (*((cfunc)->func))(self, $empty_tuple) : \
+ __Pyx__CallUnboundCMethod0(cfunc, self)))))) : \
__Pyx__CallUnboundCMethod0(cfunc, self))
#else
#define __Pyx_CallUnboundCMethod0(cfunc, self) __Pyx__CallUnboundCMethod0(cfunc, self)
@@ -1182,18 +1613,10 @@
/////////////// CallUnboundCMethod1.proto ///////////////
-static PyObject* __Pyx__CallUnboundCMethod1(__Pyx_CachedCFunction* cfunc, PyObject* self, PyObject* arg); /*proto*/
+static PyObject* __Pyx__CallUnboundCMethod1(__Pyx_CachedCFunction* cfunc, PyObject* self, PyObject* arg);/*proto*/
#if CYTHON_COMPILING_IN_CPYTHON
-#define __Pyx_CallUnboundCMethod1(cfunc, self, arg) \
- ((likely((cfunc)->func && (cfunc)->flag == METH_O)) ? (*((cfunc)->func))(self, arg) : \
- ((PY_VERSION_HEX >= 0x030600B1 && (cfunc)->func && (cfunc)->flag == METH_FASTCALL) ? \
- (PY_VERSION_HEX >= 0x030700A0 ? \
- (*(__Pyx_PyCFunctionFast)(cfunc)->func)(self, &arg, 1) : \
- (*(__Pyx_PyCFunctionFastWithKeywords)(cfunc)->func)(self, &arg, 1, NULL)) : \
- (PY_VERSION_HEX >= 0x030700A0 && (cfunc)->func && (cfunc)->flag == (METH_FASTCALL | METH_KEYWORDS) ? \
- (*(__Pyx_PyCFunctionFastWithKeywords)(cfunc)->func)(self, &arg, 1, NULL) : \
- __Pyx__CallUnboundCMethod1(cfunc, self, arg))))
+static CYTHON_INLINE PyObject* __Pyx_CallUnboundCMethod1(__Pyx_CachedCFunction* cfunc, PyObject* self, PyObject* arg);/*proto*/
#else
#define __Pyx_CallUnboundCMethod1(cfunc, self, arg) __Pyx__CallUnboundCMethod1(cfunc, self, arg)
#endif
@@ -1202,9 +1625,30 @@
//@requires: UnpackUnboundCMethod
//@requires: PyObjectCall
+#if CYTHON_COMPILING_IN_CPYTHON
+static CYTHON_INLINE PyObject* __Pyx_CallUnboundCMethod1(__Pyx_CachedCFunction* cfunc, PyObject* self, PyObject* arg) {
+ if (likely(cfunc->func)) {
+ int flag = cfunc->flag;
+ // Not using #ifdefs for PY_VERSION_HEX to avoid C compiler warnings about unused functions.
+ if (flag == METH_O) {
+ return (*(cfunc->func))(self, arg);
+ } else if (PY_VERSION_HEX >= 0x030600B1 && flag == METH_FASTCALL) {
+ if (PY_VERSION_HEX >= 0x030700A0) {
+ return (*(__Pyx_PyCFunctionFast)(void*)(PyCFunction)cfunc->func)(self, &arg, 1);
+ } else {
+ return (*(__Pyx_PyCFunctionFastWithKeywords)(void*)(PyCFunction)cfunc->func)(self, &arg, 1, NULL);
+ }
+ } else if (PY_VERSION_HEX >= 0x030700A0 && flag == (METH_FASTCALL | METH_KEYWORDS)) {
+ return (*(__Pyx_PyCFunctionFastWithKeywords)(void*)(PyCFunction)cfunc->func)(self, &arg, 1, NULL);
+ }
+ }
+ return __Pyx__CallUnboundCMethod1(cfunc, self, arg);
+}
+#endif
+
static PyObject* __Pyx__CallUnboundCMethod1(__Pyx_CachedCFunction* cfunc, PyObject* self, PyObject* arg){
PyObject *args, *result = NULL;
- if (unlikely(!cfunc->method) && unlikely(__Pyx_TryUnpackUnboundCMethod(cfunc) < 0)) return NULL;
+ if (unlikely(!cfunc->func && !cfunc->method) && unlikely(__Pyx_TryUnpackUnboundCMethod(cfunc) < 0)) return NULL;
#if CYTHON_COMPILING_IN_CPYTHON
if (cfunc->func && (cfunc->flag & METH_VARARGS)) {
args = PyTuple_New(1);
@@ -1212,7 +1656,7 @@
Py_INCREF(arg);
PyTuple_SET_ITEM(args, 0, arg);
if (cfunc->flag & METH_KEYWORDS)
- result = (*(PyCFunctionWithKeywords)cfunc->func)(self, args, NULL);
+ result = (*(PyCFunctionWithKeywords)(void*)(PyCFunction)cfunc->func)(self, args, NULL);
else
result = (*cfunc->func)(self, args);
} else {
@@ -1235,30 +1679,95 @@
}
+/////////////// CallUnboundCMethod2.proto ///////////////
+
+static PyObject* __Pyx__CallUnboundCMethod2(__Pyx_CachedCFunction* cfunc, PyObject* self, PyObject* arg1, PyObject* arg2); /*proto*/
+
+#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030600B1
+static CYTHON_INLINE PyObject *__Pyx_CallUnboundCMethod2(__Pyx_CachedCFunction *cfunc, PyObject *self, PyObject *arg1, PyObject *arg2); /*proto*/
+#else
+#define __Pyx_CallUnboundCMethod2(cfunc, self, arg1, arg2) __Pyx__CallUnboundCMethod2(cfunc, self, arg1, arg2)
+#endif
+
+/////////////// CallUnboundCMethod2 ///////////////
+//@requires: UnpackUnboundCMethod
+//@requires: PyObjectCall
+
+#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030600B1
+static CYTHON_INLINE PyObject *__Pyx_CallUnboundCMethod2(__Pyx_CachedCFunction *cfunc, PyObject *self, PyObject *arg1, PyObject *arg2) {
+ if (likely(cfunc->func)) {
+ PyObject *args[2] = {arg1, arg2};
+ if (cfunc->flag == METH_FASTCALL) {
+ #if PY_VERSION_HEX >= 0x030700A0
+ return (*(__Pyx_PyCFunctionFast)(void*)(PyCFunction)cfunc->func)(self, args, 2);
+ #else
+ return (*(__Pyx_PyCFunctionFastWithKeywords)(void*)(PyCFunction)cfunc->func)(self, args, 2, NULL);
+ #endif
+ }
+ #if PY_VERSION_HEX >= 0x030700A0
+ if (cfunc->flag == (METH_FASTCALL | METH_KEYWORDS))
+ return (*(__Pyx_PyCFunctionFastWithKeywords)(void*)(PyCFunction)cfunc->func)(self, args, 2, NULL);
+ #endif
+ }
+ return __Pyx__CallUnboundCMethod2(cfunc, self, arg1, arg2);
+}
+#endif
+
+static PyObject* __Pyx__CallUnboundCMethod2(__Pyx_CachedCFunction* cfunc, PyObject* self, PyObject* arg1, PyObject* arg2){
+ PyObject *args, *result = NULL;
+ if (unlikely(!cfunc->func && !cfunc->method) && unlikely(__Pyx_TryUnpackUnboundCMethod(cfunc) < 0)) return NULL;
+#if CYTHON_COMPILING_IN_CPYTHON
+ if (cfunc->func && (cfunc->flag & METH_VARARGS)) {
+ args = PyTuple_New(2);
+ if (unlikely(!args)) goto bad;
+ Py_INCREF(arg1);
+ PyTuple_SET_ITEM(args, 0, arg1);
+ Py_INCREF(arg2);
+ PyTuple_SET_ITEM(args, 1, arg2);
+ if (cfunc->flag & METH_KEYWORDS)
+ result = (*(PyCFunctionWithKeywords)(void*)(PyCFunction)cfunc->func)(self, args, NULL);
+ else
+ result = (*cfunc->func)(self, args);
+ } else {
+ args = PyTuple_New(3);
+ if (unlikely(!args)) goto bad;
+ Py_INCREF(self);
+ PyTuple_SET_ITEM(args, 0, self);
+ Py_INCREF(arg1);
+ PyTuple_SET_ITEM(args, 1, arg1);
+ Py_INCREF(arg2);
+ PyTuple_SET_ITEM(args, 2, arg2);
+ result = __Pyx_PyObject_Call(cfunc->method, args, NULL);
+ }
+#else
+ args = PyTuple_Pack(3, self, arg1, arg2);
+ if (unlikely(!args)) goto bad;
+ result = __Pyx_PyObject_Call(cfunc->method, args, NULL);
+#endif
+bad:
+ Py_XDECREF(args);
+ return result;
+}
+
+
/////////////// PyObjectCallMethod0.proto ///////////////
static PyObject* __Pyx_PyObject_CallMethod0(PyObject* obj, PyObject* method_name); /*proto*/
/////////////// PyObjectCallMethod0 ///////////////
-//@requires: PyObjectGetAttrStr
+//@requires: PyObjectGetMethod
//@requires: PyObjectCallOneArg
//@requires: PyObjectCallNoArg
static PyObject* __Pyx_PyObject_CallMethod0(PyObject* obj, PyObject* method_name) {
- PyObject *method, *result = NULL;
- method = __Pyx_PyObject_GetAttrStr(obj, method_name);
- if (unlikely(!method)) goto bad;
-#if CYTHON_UNPACK_METHODS
- if (likely(PyMethod_Check(method))) {
- PyObject *self = PyMethod_GET_SELF(method);
- if (likely(self)) {
- PyObject *function = PyMethod_GET_FUNCTION(method);
- result = __Pyx_PyObject_CallOneArg(function, self);
- Py_DECREF(method);
- return result;
- }
+ PyObject *method = NULL, *result = NULL;
+ int is_method = __Pyx_PyObject_GetMethod(obj, method_name, &method);
+ if (likely(is_method)) {
+ result = __Pyx_PyObject_CallOneArg(method, obj);
+ Py_DECREF(method);
+ return result;
}
-#endif
+ if (unlikely(!method)) goto bad;
result = __Pyx_PyObject_CallNoArg(method);
Py_DECREF(method);
bad:
@@ -1271,54 +1780,27 @@
static PyObject* __Pyx_PyObject_CallMethod1(PyObject* obj, PyObject* method_name, PyObject* arg); /*proto*/
/////////////// PyObjectCallMethod1 ///////////////
-//@requires: PyObjectGetAttrStr
+//@requires: PyObjectGetMethod
//@requires: PyObjectCallOneArg
-//@requires: PyFunctionFastCall
-//@requires: PyCFunctionFastCall
+//@requires: PyObjectCall2Args
+
+static PyObject* __Pyx__PyObject_CallMethod1(PyObject* method, PyObject* arg) {
+ // Separate function to avoid excessive inlining.
+ PyObject *result = __Pyx_PyObject_CallOneArg(method, arg);
+ Py_DECREF(method);
+ return result;
+}
static PyObject* __Pyx_PyObject_CallMethod1(PyObject* obj, PyObject* method_name, PyObject* arg) {
- PyObject *method, *result = NULL;
- method = __Pyx_PyObject_GetAttrStr(obj, method_name);
- if (unlikely(!method)) goto done;
-#if CYTHON_UNPACK_METHODS
- if (likely(PyMethod_Check(method))) {
- PyObject *self = PyMethod_GET_SELF(method);
- if (likely(self)) {
- PyObject *args;
- PyObject *function = PyMethod_GET_FUNCTION(method);
- #if CYTHON_FAST_PYCALL
- if (PyFunction_Check(function)) {
- PyObject *args[2] = {self, arg};
- result = __Pyx_PyFunction_FastCall(function, args, 2);
- goto done;
- }
- #endif
- #if CYTHON_FAST_PYCCALL
- if (__Pyx_PyFastCFunction_Check(function)) {
- PyObject *args[2] = {self, arg};
- result = __Pyx_PyCFunction_FastCall(function, args, 2);
- goto done;
- }
- #endif
- args = PyTuple_New(2);
- if (unlikely(!args)) goto done;
- Py_INCREF(self);
- PyTuple_SET_ITEM(args, 0, self);
- Py_INCREF(arg);
- PyTuple_SET_ITEM(args, 1, arg);
- Py_INCREF(function);
- Py_DECREF(method); method = NULL;
- result = __Pyx_PyObject_Call(function, args, NULL);
- Py_DECREF(args);
- Py_DECREF(function);
- return result;
- }
+ PyObject *method = NULL, *result;
+ int is_method = __Pyx_PyObject_GetMethod(obj, method_name, &method);
+ if (likely(is_method)) {
+ result = __Pyx_PyObject_Call2Args(method, obj, arg);
+ Py_DECREF(method);
+ return result;
}
-#endif
- result = __Pyx_PyObject_CallOneArg(method, arg);
-done:
- Py_XDECREF(method);
- return result;
+ if (unlikely(!method)) return NULL;
+ return __Pyx__PyObject_CallMethod1(method, arg);
}
@@ -1327,72 +1809,49 @@
static PyObject* __Pyx_PyObject_CallMethod2(PyObject* obj, PyObject* method_name, PyObject* arg1, PyObject* arg2); /*proto*/
/////////////// PyObjectCallMethod2 ///////////////
-//@requires: PyObjectGetAttrStr
//@requires: PyObjectCall
//@requires: PyFunctionFastCall
//@requires: PyCFunctionFastCall
+//@requires: PyObjectCall2Args
+
+static PyObject* __Pyx_PyObject_Call3Args(PyObject* function, PyObject* arg1, PyObject* arg2, PyObject* arg3) {
+ PyObject *args, *result = NULL;
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(function)) {
+ PyObject *args[3] = {arg1, arg2, arg3};
+ return __Pyx_PyFunction_FastCall(function, args, 3);
+ }
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(function)) {
+ PyObject *args[3] = {arg1, arg2, arg3};
+ return __Pyx_PyCFunction_FastCall(function, args, 3);
+ }
+ #endif
+
+ args = PyTuple_New(3);
+ if (unlikely(!args)) goto done;
+ Py_INCREF(arg1);
+ PyTuple_SET_ITEM(args, 0, arg1);
+ Py_INCREF(arg2);
+ PyTuple_SET_ITEM(args, 1, arg2);
+ Py_INCREF(arg3);
+ PyTuple_SET_ITEM(args, 2, arg3);
+
+ result = __Pyx_PyObject_Call(function, args, NULL);
+ Py_DECREF(args);
+done:
+ return result;
+}
static PyObject* __Pyx_PyObject_CallMethod2(PyObject* obj, PyObject* method_name, PyObject* arg1, PyObject* arg2) {
- PyObject *args, *method, *result = NULL;
- method = __Pyx_PyObject_GetAttrStr(obj, method_name);
- if (unlikely(!method)) return NULL;
-#if CYTHON_UNPACK_METHODS
- if (likely(PyMethod_Check(method)) && likely(PyMethod_GET_SELF(method))) {
- PyObject *self, *function;
- self = PyMethod_GET_SELF(method);
- function = PyMethod_GET_FUNCTION(method);
- #if CYTHON_FAST_PYCALL
- if (PyFunction_Check(function)) {
- PyObject *args[3] = {self, arg1, arg2};
- result = __Pyx_PyFunction_FastCall(function, args, 3);
- goto done;
- }
- #endif
- #if CYTHON_FAST_PYCCALL
- if (__Pyx_PyFastCFunction_Check(function)) {
- PyObject *args[3] = {self, arg1, arg2};
- result = __Pyx_PyFunction_FastCall(function, args, 3);
- goto done;
- }
- #endif
- args = PyTuple_New(3);
- if (unlikely(!args)) goto done;
- Py_INCREF(self);
- PyTuple_SET_ITEM(args, 0, self);
- Py_INCREF(arg1);
- PyTuple_SET_ITEM(args, 1, arg1);
- Py_INCREF(arg2);
- PyTuple_SET_ITEM(args, 2, arg2);
- Py_INCREF(function);
+ PyObject *args, *method = NULL, *result = NULL;
+ int is_method = __Pyx_PyObject_GetMethod(obj, method_name, &method);
+ if (likely(is_method)) {
+ result = __Pyx_PyObject_Call3Args(method, obj, arg1, arg2);
Py_DECREF(method);
- method = function;
- } else
-#endif
-#if CYTHON_FAST_PYCALL
- if (PyFunction_Check(method)) {
- PyObject *args[2] = {arg1, arg2};
- result = __Pyx_PyFunction_FastCall(method, args, 2);
- goto done;
- } else
-#endif
-#if CYTHON_FAST_PYCCALL
- if (__Pyx_PyFastCFunction_Check(method)) {
- PyObject *args[2] = {arg1, arg2};
- result = __Pyx_PyCFunction_FastCall(method, args, 2);
- goto done;
- } else
-#endif
- {
- args = PyTuple_New(2);
- if (unlikely(!args)) goto done;
- Py_INCREF(arg1);
- PyTuple_SET_ITEM(args, 0, arg1);
- Py_INCREF(arg2);
- PyTuple_SET_ITEM(args, 1, arg2);
+ return result;
}
- result = __Pyx_PyObject_Call(method, args, NULL);
- Py_DECREF(args);
-done:
+ if (unlikely(!method)) return NULL;
+ result = __Pyx_PyObject_Call2Args(method, arg1, arg2);
Py_DECREF(method);
return result;
}
@@ -1474,22 +1933,55 @@
// let's assume that the non-public C-API function might still change during the 3.6 beta phase
#if 1 || PY_VERSION_HEX < 0x030600B1
-static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, int nargs, PyObject *kwargs);
+static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs);
#else
#define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs)
#endif
+
+// Backport from Python 3
+// Assert a build-time dependency, as an expression.
+// Your compile will fail if the condition isn't true, or can't be evaluated
+// by the compiler. This can be used in an expression: its value is 0.
+// Example:
+// #define foo_to_char(foo) \
+// ((char *)(foo) \
+// + Py_BUILD_ASSERT_EXPR(offsetof(struct foo, string) == 0))
+//
+// Written by Rusty Russell, public domain, http://ccodearchive.net/
+#define __Pyx_BUILD_ASSERT_EXPR(cond) \
+ (sizeof(char [1 - 2*!(cond)]) - 1)
+
+#ifndef Py_MEMBER_SIZE
+// Get the size of a structure member in bytes
+#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member)
+#endif
+
+ // Initialised by module init code.
+ static size_t __pyx_pyframe_localsplus_offset = 0;
+
+ #include "frameobject.h"
+ // This is the long runtime version of
+ // #define __Pyx_PyFrame_GetLocalsplus(frame) ((frame)->f_localsplus)
+ // offsetof(PyFrameObject, f_localsplus) differs between regular C-Python and Stackless Python.
+ // Therefore the offset is computed at run time from PyFrame_type.tp_basicsize. That is feasible,
+ // because f_localsplus is the last field of PyFrameObject (checked by Py_BUILD_ASSERT_EXPR below).
+ #define __Pxy_PyFrame_Initialize_Offsets() \
+ ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)), \
+ (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus)))
+ #define __Pyx_PyFrame_GetLocalsplus(frame) \
+ (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset))
#endif
+
/////////////// PyFunctionFastCall ///////////////
// copied from CPython 3.6 ceval.c
#if CYTHON_FAST_PYCALL
-#include "frameobject.h"
static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na,
PyObject *globals) {
PyFrameObject *f;
- PyThreadState *tstate = PyThreadState_GET();
+ PyThreadState *tstate = __Pyx_PyThreadState_Current;
PyObject **fastlocals;
Py_ssize_t i;
PyObject *result;
@@ -1505,7 +1997,7 @@
return NULL;
}
- fastlocals = f->f_localsplus;
+ fastlocals = __Pyx_PyFrame_GetLocalsplus(f);
for (i = 0; i < na; i++) {
Py_INCREF(*args);
@@ -1522,7 +2014,7 @@
#if 1 || PY_VERSION_HEX < 0x030600B1
-static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, int nargs, PyObject *kwargs) {
+static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs) {
PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func);
PyObject *globals = PyFunction_GET_GLOBALS(func);
PyObject *argdefs = PyFunction_GET_DEFAULTS(func);
@@ -1616,12 +2108,12 @@
//#elif PY_MAJOR_VERSION >= 3
#if PY_MAJOR_VERSION >= 3
result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL,
- args, nargs,
+ args, (int)nargs,
k, (int)nk,
d, (int)nd, kwdefs, closure);
#else
result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL,
- args, nargs,
+ args, (int)nargs,
k, (int)nk,
d, (int)nd, closure);
#endif
@@ -1653,7 +2145,7 @@
int flags = PyCFunction_GET_FLAGS(func);
assert(PyCFunction_Check(func));
- assert(METH_FASTCALL == (flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS)));
+ assert(METH_FASTCALL == (flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS)));
assert(nargs >= 0);
assert(nargs == 0 || args != NULL);
@@ -1663,14 +2155,54 @@
assert(!PyErr_Occurred());
if ((PY_VERSION_HEX < 0x030700A0) || unlikely(flags & METH_KEYWORDS)) {
- return (*((__Pyx_PyCFunctionFastWithKeywords)meth)) (self, args, nargs, NULL);
+ return (*((__Pyx_PyCFunctionFastWithKeywords)(void*)meth)) (self, args, nargs, NULL);
} else {
- return (*((__Pyx_PyCFunctionFast)meth)) (self, args, nargs);
+ return (*((__Pyx_PyCFunctionFast)(void*)meth)) (self, args, nargs);
}
}
#endif /* CYTHON_FAST_PYCCALL */
+/////////////// PyObjectCall2Args.proto ///////////////
+
+static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2); /*proto*/
+
+/////////////// PyObjectCall2Args ///////////////
+//@requires: PyObjectCall
+//@requires: PyFunctionFastCall
+//@requires: PyCFunctionFastCall
+
+static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2) {
+ PyObject *args, *result = NULL;
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(function)) {
+ PyObject *args[2] = {arg1, arg2};
+ return __Pyx_PyFunction_FastCall(function, args, 2);
+ }
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(function)) {
+ PyObject *args[2] = {arg1, arg2};
+ return __Pyx_PyCFunction_FastCall(function, args, 2);
+ }
+ #endif
+
+ args = PyTuple_New(2);
+ if (unlikely(!args)) goto done;
+ Py_INCREF(arg1);
+ PyTuple_SET_ITEM(args, 0, arg1);
+ Py_INCREF(arg2);
+ PyTuple_SET_ITEM(args, 1, arg2);
+
+ Py_INCREF(function);
+ result = __Pyx_PyObject_Call(function, args, NULL);
+ Py_DECREF(args);
+ Py_DECREF(function);
+done:
+ return result;
+}
+
+
/////////////// PyObjectCallOneArg.proto ///////////////
static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg); /*proto*/
@@ -1747,10 +2279,11 @@
}
#endif
#ifdef __Pyx_CyFunction_USED
- if (likely(PyCFunction_Check(func) || PyObject_TypeCheck(func, __pyx_CyFunctionType))) {
+ if (likely(PyCFunction_Check(func) || __Pyx_CyFunction_Check(func)))
#else
- if (likely(PyCFunction_Check(func))) {
+ if (likely(PyCFunction_Check(func)))
#endif
+ {
if (likely(PyCFunction_GET_FLAGS(func) & METH_NOARGS)) {
// fast and simple case that we are optimising for
return __Pyx_PyObject_CallMethO(func, NULL);
@@ -1865,3 +2398,63 @@
#undef __Pyx_TryMatrixMethod
#endif
+
+
+/////////////// PyDictVersioning.proto ///////////////
+
+#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS
+#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1)
+#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag)
+#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var) \
+ (version_var) = __PYX_GET_DICT_VERSION(dict); \
+ (cache_var) = (value);
+
+#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) { \
+ static PY_UINT64_T __pyx_dict_version = 0; \
+ static PyObject *__pyx_dict_cached_value = NULL; \
+ if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) { \
+ (VAR) = __pyx_dict_cached_value; \
+ } else { \
+ (VAR) = __pyx_dict_cached_value = (LOOKUP); \
+ __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT); \
+ } \
+}
+
+static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj); /*proto*/
+static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj); /*proto*/
+static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version); /*proto*/
+
+#else
+#define __PYX_GET_DICT_VERSION(dict) (0)
+#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)
+#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP);
+#endif
+
+/////////////// PyDictVersioning ///////////////
+
+#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS
+static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) {
+ PyObject *dict = Py_TYPE(obj)->tp_dict;
+ return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0;
+}
+
+static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) {
+ PyObject **dictptr = NULL;
+ Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset;
+ if (offset) {
+#if CYTHON_COMPILING_IN_CPYTHON
+ dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj);
+#else
+ dictptr = _PyObject_GetDictPtr(obj);
+#endif
+ }
+ return (dictptr && *dictptr) ? __PYX_GET_DICT_VERSION(*dictptr) : 0;
+}
+
+static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) {
+ PyObject *dict = Py_TYPE(obj)->tp_dict;
+ if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict)))
+ return 0;
+ return obj_dict_version == __Pyx_get_object_dict_version(obj);
+}
+#endif
diff -Nru cython-0.26.1/Cython/Utility/Optimize.c cython-0.29.14/Cython/Utility/Optimize.c
--- cython-0.26.1/Cython/Utility/Optimize.c 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Utility/Optimize.c 2019-06-30 06:50:51.000000000 +0000
@@ -123,13 +123,13 @@
#define __Pyx_PyObject_PopIndex(L, py_ix, ix, is_signed, type, to_py_func) ( \
(likely(PyList_CheckExact(L) && __Pyx_fits_Py_ssize_t(ix, type, is_signed))) ? \
__Pyx__PyList_PopIndex(L, py_ix, ix) : ( \
- (unlikely(py_ix == Py_None)) ? __Pyx__PyObject_PopNewIndex(L, to_py_func(ix)) : \
+ (unlikely((py_ix) == Py_None)) ? __Pyx__PyObject_PopNewIndex(L, to_py_func(ix)) : \
__Pyx__PyObject_PopIndex(L, py_ix)))
#define __Pyx_PyList_PopIndex(L, py_ix, ix, is_signed, type, to_py_func) ( \
__Pyx_fits_Py_ssize_t(ix, type, is_signed) ? \
__Pyx__PyList_PopIndex(L, py_ix, ix) : ( \
- (unlikely(py_ix == Py_None)) ? __Pyx__PyObject_PopNewIndex(L, to_py_func(ix)) : \
+ (unlikely((py_ix) == Py_None)) ? __Pyx__PyObject_PopNewIndex(L, to_py_func(ix)) : \
__Pyx__PyObject_PopIndex(L, py_ix)))
#else
@@ -138,7 +138,7 @@
__Pyx_PyObject_PopIndex(L, py_ix, ix, is_signed, type, to_py_func)
#define __Pyx_PyObject_PopIndex(L, py_ix, ix, is_signed, type, to_py_func) ( \
- (unlikely(py_ix == Py_None)) ? __Pyx__PyObject_PopNewIndex(L, to_py_func(ix)) : \
+ (unlikely((py_ix) == Py_None)) ? __Pyx__PyObject_PopNewIndex(L, to_py_func(ix)) : \
__Pyx__PyObject_PopIndex(L, py_ix))
#endif
@@ -165,7 +165,7 @@
if (cix < 0) {
cix += size;
}
- if (likely(0 <= cix && cix < size)) {
+ if (likely(__Pyx_is_valid_index(cix, size))) {
PyObject* v = PyList_GET_ITEM(L, cix);
Py_SIZE(L) -= 1;
size -= 1;
@@ -198,6 +198,8 @@
value = default_value;
}
Py_INCREF(value);
+ // avoid C compiler warning about unused utility functions
+ if ((1));
#else
if (PyString_CheckExact(key) || PyUnicode_CheckExact(key) || PyInt_CheckExact(key)) {
/* these presumably have safe hash functions */
@@ -206,13 +208,14 @@
value = default_value;
}
Py_INCREF(value);
- } else {
- if (default_value == Py_None)
- default_value = NULL;
- value = PyObject_CallMethodObjArgs(
- d, PYIDENT("get"), key, default_value, NULL);
}
#endif
+ else {
+ if (default_value == Py_None)
+ value = CALL_UNBOUND_METHOD(PyDict_Type, "get", d, key);
+ else
+ value = CALL_UNBOUND_METHOD(PyDict_Type, "get", d, key, default_value);
+ }
return value;
}
@@ -222,7 +225,6 @@
static CYTHON_INLINE PyObject *__Pyx_PyDict_SetDefault(PyObject *d, PyObject *key, PyObject *default_value, int is_safe_type); /*proto*/
/////////////// dict_setdefault ///////////////
-//@requires: ObjectHandling.c::PyObjectCallMethod2
static CYTHON_INLINE PyObject *__Pyx_PyDict_SetDefault(PyObject *d, PyObject *key, PyObject *default_value,
CYTHON_UNUSED int is_safe_type) {
@@ -259,7 +261,7 @@
#endif
#endif
} else {
- value = __Pyx_PyObject_CallMethod2(d, PYIDENT("setdefault"), key, default_value);
+ value = CALL_UNBOUND_METHOD(PyDict_Type, "setdefault", d, key, default_value);
}
return value;
}
@@ -269,6 +271,28 @@
#define __Pyx_PyDict_Clear(d) (PyDict_Clear(d), 0)
+
+/////////////// py_dict_pop.proto ///////////////
+
+static CYTHON_INLINE PyObject *__Pyx_PyDict_Pop(PyObject *d, PyObject *key, PyObject *default_value); /*proto*/
+
+/////////////// py_dict_pop ///////////////
+
+static CYTHON_INLINE PyObject *__Pyx_PyDict_Pop(PyObject *d, PyObject *key, PyObject *default_value) {
+#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX > 0x030600B3
+ if ((1)) {
+ return _PyDict_Pop(d, key, default_value);
+ } else
+ // avoid "function unused" warnings
+#endif
+ if (default_value) {
+ return CALL_UNBOUND_METHOD(PyDict_Type, "pop", d, key, default_value);
+ } else {
+ return CALL_UNBOUND_METHOD(PyDict_Type, "pop", d, key);
+ }
+}
+
+
/////////////// dict_iter.proto ///////////////
static CYTHON_INLINE PyObject* __Pyx_dict_iterator(PyObject* dict, int is_dict, PyObject* method_name,
@@ -294,18 +318,20 @@
// On PyPy3, we need to translate manually a few method names.
// This logic is not needed on CPython thanks to the fast case above.
static PyObject *py_items = NULL, *py_keys = NULL, *py_values = NULL;
- const char *name = PyUnicode_AsUTF8(method_name);
PyObject **pp = NULL;
- if (strcmp(name, "iteritems") == 0) pp = &py_items;
- else if (strcmp(name, "iterkeys") == 0) pp = &py_keys;
- else if (strcmp(name, "itervalues") == 0) pp = &py_values;
- if (pp) {
- if (!*pp) {
- *pp = PyUnicode_FromString(name + 4);
- if (!*pp)
- return NULL;
+ if (method_name) {
+ const char *name = PyUnicode_AsUTF8(method_name);
+ if (strcmp(name, "iteritems") == 0) pp = &py_items;
+ else if (strcmp(name, "iterkeys") == 0) pp = &py_keys;
+ else if (strcmp(name, "itervalues") == 0) pp = &py_values;
+ if (pp) {
+ if (!*pp) {
+ *pp = PyUnicode_FromString(name + 4);
+ if (!*pp)
+ return NULL;
+ }
+ method_name = *pp;
}
- method_name = *pp;
}
#endif
}
@@ -395,6 +421,143 @@
}
+/////////////// set_iter.proto ///////////////
+
+static CYTHON_INLINE PyObject* __Pyx_set_iterator(PyObject* iterable, int is_set,
+ Py_ssize_t* p_orig_length, int* p_source_is_set); /*proto*/
+static CYTHON_INLINE int __Pyx_set_iter_next(
+ PyObject* iter_obj, Py_ssize_t orig_length,
+ Py_ssize_t* ppos, PyObject **value,
+ int source_is_set); /*proto*/
+
+/////////////// set_iter ///////////////
+//@requires: ObjectHandling.c::IterFinish
+
+static CYTHON_INLINE PyObject* __Pyx_set_iterator(PyObject* iterable, int is_set,
+ Py_ssize_t* p_orig_length, int* p_source_is_set) {
+#if CYTHON_COMPILING_IN_CPYTHON
+ is_set = is_set || likely(PySet_CheckExact(iterable) || PyFrozenSet_CheckExact(iterable));
+ *p_source_is_set = is_set;
+ if (likely(is_set)) {
+ *p_orig_length = PySet_Size(iterable);
+ Py_INCREF(iterable);
+ return iterable;
+ }
+#else
+ (void)is_set;
+ *p_source_is_set = 0;
+#endif
+ *p_orig_length = 0;
+ return PyObject_GetIter(iterable);
+}
+
+static CYTHON_INLINE int __Pyx_set_iter_next(
+ PyObject* iter_obj, Py_ssize_t orig_length,
+ Py_ssize_t* ppos, PyObject **value,
+ int source_is_set) {
+ if (!CYTHON_COMPILING_IN_CPYTHON || unlikely(!source_is_set)) {
+ *value = PyIter_Next(iter_obj);
+ if (unlikely(!*value)) {
+ return __Pyx_IterFinish();
+ }
+ (void)orig_length;
+ (void)ppos;
+ return 1;
+ }
+#if CYTHON_COMPILING_IN_CPYTHON
+ if (unlikely(PySet_GET_SIZE(iter_obj) != orig_length)) {
+ PyErr_SetString(
+ PyExc_RuntimeError,
+ "set changed size during iteration");
+ return -1;
+ }
+ {
+ Py_hash_t hash;
+ int ret = _PySet_NextEntry(iter_obj, ppos, value, &hash);
+ // CPython does not raise errors here, only if !isinstance(iter_obj, set/frozenset)
+ assert (ret != -1);
+ if (likely(ret)) {
+ Py_INCREF(*value);
+ return 1;
+ }
+ }
+#endif
+ return 0;
+}
+
+/////////////// py_set_discard_unhashable ///////////////
+//@requires: Builtins.c::pyfrozenset_new
+
+static int __Pyx_PySet_DiscardUnhashable(PyObject *set, PyObject *key) {
+ PyObject *tmpkey;
+ int rv;
+
+ if (likely(!PySet_Check(key) || !PyErr_ExceptionMatches(PyExc_TypeError)))
+ return -1;
+ PyErr_Clear();
+ tmpkey = __Pyx_PyFrozenSet_New(key);
+ if (tmpkey == NULL)
+ return -1;
+ rv = PySet_Discard(set, tmpkey);
+ Py_DECREF(tmpkey);
+ return rv;
+}
+
+
+/////////////// py_set_discard.proto ///////////////
+
+static CYTHON_INLINE int __Pyx_PySet_Discard(PyObject *set, PyObject *key); /*proto*/
+
+/////////////// py_set_discard ///////////////
+//@requires: py_set_discard_unhashable
+
+static CYTHON_INLINE int __Pyx_PySet_Discard(PyObject *set, PyObject *key) {
+ int found = PySet_Discard(set, key);
+ // Convert *key* to frozenset if necessary
+ if (unlikely(found < 0)) {
+ found = __Pyx_PySet_DiscardUnhashable(set, key);
+ }
+ // note: returns -1 on error, 0 (not found) or 1 (found) otherwise => error check for -1 or < 0 works
+ return found;
+}
+
+
+/////////////// py_set_remove.proto ///////////////
+
+static CYTHON_INLINE int __Pyx_PySet_Remove(PyObject *set, PyObject *key); /*proto*/
+
+/////////////// py_set_remove ///////////////
+//@requires: py_set_discard_unhashable
+
+static int __Pyx_PySet_RemoveNotFound(PyObject *set, PyObject *key, int found) {
+ // Convert *key* to frozenset if necessary
+ if (unlikely(found < 0)) {
+ found = __Pyx_PySet_DiscardUnhashable(set, key);
+ }
+ if (likely(found == 0)) {
+ // Not found
+ PyObject *tup;
+ tup = PyTuple_Pack(1, key);
+ if (!tup)
+ return -1;
+ PyErr_SetObject(PyExc_KeyError, tup);
+ Py_DECREF(tup);
+ return -1;
+ }
+ // note: returns -1 on error, 0 (not found) or 1 (found) otherwise => error check for -1 or < 0 works
+ return found;
+}
+
+static CYTHON_INLINE int __Pyx_PySet_Remove(PyObject *set, PyObject *key) {
+ int found = PySet_Discard(set, key);
+ if (unlikely(found != 1)) {
+ // note: returns -1 on error, 0 (not found) or 1 (found) otherwise => error check for -1 or < 0 works
+ return __Pyx_PySet_RemoveNotFound(set, key, found);
+ }
+ return 0;
+}
+
+
/////////////// unicode_iter.proto ///////////////
static CYTHON_INLINE int __Pyx_init_unicode_iteration(
@@ -437,7 +600,7 @@
static double __Pyx__PyObject_AsDouble(PyObject* obj) {
PyObject* float_value;
#if !CYTHON_USE_TYPE_SLOTS
- float_value = PyNumber_Float(obj); if (0) goto bad;
+ float_value = PyNumber_Float(obj); if ((0)) goto bad;
#else
PyNumberMethods *nb = Py_TYPE(obj)->tp_as_number;
if (likely(nb) && likely(nb->nb_float)) {
@@ -522,9 +685,11 @@
return PyLong_FromUnsignedLongLong(value);
#endif
} else {
- PyObject *one = PyInt_FromLong(1L);
+ PyObject *result, *one = PyInt_FromLong(1L);
if (unlikely(!one)) return NULL;
- return PyNumber_Lshift(one, exp);
+ result = PyNumber_Lshift(one, exp);
+ Py_DECREF(one);
+ return result;
}
} else if (shiftby == -1 && PyErr_Occurred()) {
PyErr_Clear();
@@ -535,13 +700,99 @@
}
+/////////////// PyIntCompare.proto ///////////////
+
+{{py: c_ret_type = 'PyObject*' if ret_type.is_pyobject else 'int'}}
+static CYTHON_INLINE {{c_ret_type}} __Pyx_PyInt_{{'' if ret_type.is_pyobject else 'Bool'}}{{op}}{{order}}(PyObject *op1, PyObject *op2, long intval, long inplace); /*proto*/
+
+/////////////// PyIntCompare ///////////////
+
+{{py: pyval, ival = ('op2', 'b') if order == 'CObj' else ('op1', 'a') }}
+{{py: c_ret_type = 'PyObject*' if ret_type.is_pyobject else 'int'}}
+{{py: return_true = 'Py_RETURN_TRUE' if ret_type.is_pyobject else 'return 1'}}
+{{py: return_false = 'Py_RETURN_FALSE' if ret_type.is_pyobject else 'return 0'}}
+{{py: slot_name = op.lower() }}
+{{py: c_op = {'Eq': '==', 'Ne': '!='}[op] }}
+{{py:
+return_compare = (
+ (lambda a,b,c_op, return_true=return_true, return_false=return_false: "if ({a} {c_op} {b}) {return_true}; else {return_false};".format(
+ a=a, b=b, c_op=c_op, return_true=return_true, return_false=return_false))
+ if ret_type.is_pyobject else
+ (lambda a,b,c_op: "return ({a} {c_op} {b});".format(a=a, b=b, c_op=c_op))
+ )
+}}
+
+static CYTHON_INLINE {{c_ret_type}} __Pyx_PyInt_{{'' if ret_type.is_pyobject else 'Bool'}}{{op}}{{order}}(PyObject *op1, PyObject *op2, CYTHON_UNUSED long intval, CYTHON_UNUSED long inplace) {
+ if (op1 == op2) {
+ {{return_true if op == 'Eq' else return_false}};
+ }
+
+ #if PY_MAJOR_VERSION < 3
+ if (likely(PyInt_CheckExact({{pyval}}))) {
+ const long {{'a' if order == 'CObj' else 'b'}} = intval;
+ long {{ival}} = PyInt_AS_LONG({{pyval}});
+ {{return_compare('a', 'b', c_op)}}
+ }
+ #endif
+
+ #if CYTHON_USE_PYLONG_INTERNALS
+ if (likely(PyLong_CheckExact({{pyval}}))) {
+ int unequal;
+ unsigned long uintval;
+ Py_ssize_t size = Py_SIZE({{pyval}});
+ const digit* digits = ((PyLongObject*){{pyval}})->ob_digit;
+ if (intval == 0) {
+ // == 0 => Py_SIZE(pyval) == 0
+ {{return_compare('size', '0', c_op)}}
+ } else if (intval < 0) {
+ // < 0 => Py_SIZE(pyval) < 0
+ if (size >= 0)
+ {{return_false if op == 'Eq' else return_true}};
+ // both are negative => can use absolute values now.
+ intval = -intval;
+ size = -size;
+ } else {
+ // > 0 => Py_SIZE(pyval) > 0
+ if (size <= 0)
+ {{return_false if op == 'Eq' else return_true}};
+ }
+ // After checking that the sign is the same (and excluding 0), now compare the absolute values.
+ // When inlining, the C compiler should select exactly one line from this unrolled loop.
+ uintval = (unsigned long) intval;
+ {{for _size in range(4, 0, -1)}}
+#if PyLong_SHIFT * {{_size}} < SIZEOF_LONG*8
+ if (uintval >> (PyLong_SHIFT * {{_size}})) {
+ // The C integer value is between (PyLong_BASE ** _size) and MIN(PyLong_BASE ** _size, LONG_MAX).
+ unequal = (size != {{_size+1}}) || (digits[0] != (uintval & (unsigned long) PyLong_MASK))
+ {{for _i in range(1, _size+1)}} | (digits[{{_i}}] != ((uintval >> ({{_i}} * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)){{endfor}};
+ } else
+#endif
+ {{endfor}}
+ unequal = (size != 1) || (((unsigned long) digits[0]) != (uintval & (unsigned long) PyLong_MASK));
+
+ {{return_compare('unequal', '0', c_op)}}
+ }
+ #endif
+
+ if (PyFloat_CheckExact({{pyval}})) {
+ const long {{'a' if order == 'CObj' else 'b'}} = intval;
+ double {{ival}} = PyFloat_AS_DOUBLE({{pyval}});
+ {{return_compare('(double)a', '(double)b', c_op)}}
+ }
+
+ return {{'' if ret_type.is_pyobject else '__Pyx_PyObject_IsTrueAndDecref'}}(
+ PyObject_RichCompare(op1, op2, Py_{{op.upper()}}));
+}
+
+
/////////////// PyIntBinop.proto ///////////////
+{{py: c_ret_type = 'PyObject*' if ret_type.is_pyobject else 'int'}}
#if !CYTHON_COMPILING_IN_PYPY
-static PyObject* __Pyx_PyInt_{{op}}{{order}}(PyObject *op1, PyObject *op2, long intval, int inplace); /*proto*/
+static {{c_ret_type}} __Pyx_PyInt_{{'' if ret_type.is_pyobject else 'Bool'}}{{op}}{{order}}(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); /*proto*/
#else
-#define __Pyx_PyInt_{{op}}{{order}}(op1, op2, intval, inplace) \
- {{if op in ('Eq', 'Ne')}}PyObject_RichCompare(op1, op2, Py_{{op.upper()}})
+#define __Pyx_PyInt_{{'' if ret_type.is_pyobject else 'Bool'}}{{op}}{{order}}(op1, op2, intval, inplace, zerodivision_check) \
+ {{if op in ('Eq', 'Ne')}}{{'' if ret_type.is_pyobject else '__Pyx_PyObject_IsTrueAndDecref'}}(PyObject_RichCompare(op1, op2, Py_{{op.upper()}}))
{{else}}(inplace ? PyNumber_InPlace{{op}}(op1, op2) : PyNumber_{{op}}(op1, op2))
{{endif}}
#endif
@@ -551,7 +802,12 @@
#if !CYTHON_COMPILING_IN_PYPY
{{py: from Cython.Utility import pylong_join }}
{{py: pyval, ival = ('op2', 'b') if order == 'CObj' else ('op1', 'a') }}
+{{py: c_ret_type = 'PyObject*' if ret_type.is_pyobject else 'int'}}
+{{py: return_true = 'Py_RETURN_TRUE' if ret_type.is_pyobject else 'return 1'}}
+{{py: return_false = 'Py_RETURN_FALSE' if ret_type.is_pyobject else 'return 0'}}
{{py: slot_name = {'TrueDivide': 'true_divide', 'FloorDivide': 'floor_divide'}.get(op, op.lower()) }}
+{{py: cfunc_name = '__Pyx_PyInt_%s%s%s' % ('' if ret_type.is_pyobject else 'Bool', op, order)}}
+{{py: zerodiv_check = lambda operand, _cfunc_name=cfunc_name: '%s_ZeroDivisionError(%s)' % (_cfunc_name, operand)}}
{{py:
c_op = {
'Add': '+', 'Subtract': '-', 'Remainder': '%', 'TrueDivide': '/', 'FloorDivide': '/',
@@ -560,10 +816,24 @@
}[op]
}}
-static PyObject* __Pyx_PyInt_{{op}}{{order}}(PyObject *op1, PyObject *op2, CYTHON_UNUSED long intval, CYTHON_UNUSED int inplace) {
+{{if op in ('TrueDivide', 'FloorDivide', 'Remainder')}}
+#if PY_MAJOR_VERSION < 3 || CYTHON_USE_PYLONG_INTERNALS
+#define {{zerodiv_check('operand')}} \
+ if (unlikely(zerodivision_check && ((operand) == 0))) { \
+ PyErr_SetString(PyExc_ZeroDivisionError, "integer division{{if op == 'Remainder'}} or modulo{{endif}} by zero"); \
+ return NULL; \
+ }
+#endif
+{{endif}}
+
+static {{c_ret_type}} {{cfunc_name}}(PyObject *op1, PyObject *op2, CYTHON_UNUSED long intval, int inplace, int zerodivision_check) {
+ // Prevent "unused" warnings.
+ (void)inplace;
+ (void)zerodivision_check;
+
{{if op in ('Eq', 'Ne')}}
if (op1 == op2) {
- Py_RETURN_{{'TRUE' if op == 'Eq' else 'FALSE'}};
+ {{return_true if op == 'Eq' else return_false}};
}
{{endif}}
@@ -577,9 +847,9 @@
{{if op in ('Eq', 'Ne')}}
if (a {{c_op}} b) {
- Py_RETURN_TRUE;
+ {{return_true}};
} else {
- Py_RETURN_FALSE;
+ {{return_false}};
}
{{elif c_op in '+-'}}
// adapted from intobject.c in Py2.7:
@@ -589,18 +859,21 @@
return PyInt_FromLong(x);
return PyLong_Type.tp_as_number->nb_{{slot_name}}(op1, op2);
{{elif c_op == '%'}}
+ {{zerodiv_check('b')}}
// see ExprNodes.py :: mod_int_utility_code
x = a % b;
x += ((x != 0) & ((x ^ b) < 0)) * b;
return PyInt_FromLong(x);
{{elif op == 'TrueDivide'}}
+ {{zerodiv_check('b')}}
if (8 * sizeof(long) <= 53 || likely(labs({{ival}}) <= ((PY_LONG_LONG)1 << 53))) {
return PyFloat_FromDouble((double)a / (double)b);
}
// let Python do the rounding
return PyInt_Type.tp_as_number->nb_{{slot_name}}(op1, op2);
{{elif op == 'FloorDivide'}}
- // INT_MIN / -1 is the only case that overflows
+ // INT_MIN / -1 is the only case that overflows, b == 0 is an error case
+ {{zerodiv_check('b')}}
if (unlikely(b == -1 && ((unsigned long)a) == 0-(unsigned long)a))
return PyInt_Type.tp_as_number->nb_{{slot_name}}(op1, op2);
else {
@@ -656,16 +929,18 @@
{{endif}}
}
// if size doesn't fit into a long or PY_LONG_LONG anymore, fall through to default
+ CYTHON_FALLTHROUGH;
{{endfor}}
{{endfor}}
{{if op in ('Eq', 'Ne')}}
#if PyLong_SHIFT < 30 && PyLong_SHIFT != 15
// unusual setup - your fault
- default: return PyLong_Type.tp_richcompare({{'op1, op2' if order == 'ObjC' else 'op2, op1'}}, Py_{{op.upper()}});
+ default: return {{'' if ret_type.is_pyobject else '__Pyx_PyObject_IsTrueAndDecref'}}(
+ PyLong_Type.tp_richcompare({{'op1, op2' if order == 'ObjC' else 'op2, op1'}}, Py_{{op.upper()}}));
#else
// too large for the long values we allow => definitely not equal
- default: Py_RETURN_{{'FALSE' if op == 'Eq' else 'TRUE'}};
+ default: {{return_false if op == 'Eq' else return_true}};
#endif
{{else}}
default: return PyLong_Type.tp_as_number->nb_{{slot_name}}(op1, op2);
@@ -674,22 +949,25 @@
}
{{if op in ('Eq', 'Ne')}}
if (a {{c_op}} b) {
- Py_RETURN_TRUE;
+ {{return_true}};
} else {
- Py_RETURN_FALSE;
+ {{return_false}};
}
{{else}}
{{if c_op == '%'}}
+ {{zerodiv_check('b')}}
// see ExprNodes.py :: mod_int_utility_code
x = a % b;
x += ((x != 0) & ((x ^ b) < 0)) * b;
{{elif op == 'TrueDivide'}}
+ {{zerodiv_check('b')}}
if ((8 * sizeof(long) <= 53 || likely(labs({{ival}}) <= ((PY_LONG_LONG)1 << 53)))
- || __Pyx_sst_abs(size) <= 52 / PyLong_SHIFT) {
+ || __Pyx_sst_abs(size) <= 52 / PyLong_SHIFT) {
return PyFloat_FromDouble((double)a / (double)b);
}
return PyLong_Type.tp_as_number->nb_{{slot_name}}(op1, op2);
{{elif op == 'FloorDivide'}}
+ {{zerodiv_check('b')}}
{
long q, r;
// see ExprNodes.py :: div_int_utility_code
@@ -748,12 +1026,18 @@
double {{ival}} = PyFloat_AS_DOUBLE({{pyval}});
{{if op in ('Eq', 'Ne')}}
if ((double)a {{c_op}} (double)b) {
- Py_RETURN_TRUE;
+ {{return_true}};
} else {
- Py_RETURN_FALSE;
+ {{return_false}};
}
{{else}}
double result;
+ {{if op == 'TrueDivide'}}
+ if (unlikely(zerodivision_check && b == 0)) {
+ PyErr_SetString(PyExc_ZeroDivisionError, "float division by zero");
+ return NULL;
+ }
+ {{endif}}
// copied from floatobject.c in Py3.5:
PyFPE_START_PROTECT("{{op.lower() if not op.endswith('Divide') else 'divide'}}", return NULL)
result = ((double)a) {{c_op}} (double)b;
@@ -764,7 +1048,8 @@
{{endif}}
{{if op in ('Eq', 'Ne')}}
- return PyObject_RichCompare(op1, op2, Py_{{op.upper()}});
+ return {{'' if ret_type.is_pyobject else '__Pyx_PyObject_IsTrueAndDecref'}}(
+ PyObject_RichCompare(op1, op2, Py_{{op.upper()}}));
{{else}}
return (inplace ? PyNumber_InPlace{{op}} : PyNumber_{{op}})(op1, op2);
{{endif}}
@@ -773,11 +1058,12 @@
/////////////// PyFloatBinop.proto ///////////////
+{{py: c_ret_type = 'PyObject*' if ret_type.is_pyobject else 'int'}}
#if !CYTHON_COMPILING_IN_PYPY
-static PyObject* __Pyx_PyFloat_{{op}}{{order}}(PyObject *op1, PyObject *op2, double floatval, int inplace); /*proto*/
+static {{c_ret_type}} __Pyx_PyFloat_{{'' if ret_type.is_pyobject else 'Bool'}}{{op}}{{order}}(PyObject *op1, PyObject *op2, double floatval, int inplace, int zerodivision_check); /*proto*/
#else
-#define __Pyx_PyFloat_{{op}}{{order}}(op1, op2, floatval, inplace) \
- {{if op in ('Eq', 'Ne')}}PyObject_RichCompare(op1, op2, Py_{{op.upper()}})
+#define __Pyx_PyFloat_{{'' if ret_type.is_pyobject else 'Bool'}}{{op}}{{order}}(op1, op2, floatval, inplace, zerodivision_check) \
+ {{if op in ('Eq', 'Ne')}}{{'' if ret_type.is_pyobject else '__Pyx_PyObject_IsTrueAndDecref'}}(PyObject_RichCompare(op1, op2, Py_{{op.upper()}}))
{{elif op == 'Divide'}}((inplace ? __Pyx_PyNumber_InPlaceDivide(op1, op2) : __Pyx_PyNumber_Divide(op1, op2)))
{{else}}(inplace ? PyNumber_InPlace{{op}}(op1, op2) : PyNumber_{{op}}(op1, op2))
{{endif}}
@@ -787,7 +1073,12 @@
#if !CYTHON_COMPILING_IN_PYPY
{{py: from Cython.Utility import pylong_join }}
+{{py: c_ret_type = 'PyObject*' if ret_type.is_pyobject else 'int'}}
+{{py: return_true = 'Py_RETURN_TRUE' if ret_type.is_pyobject else 'return 1'}}
+{{py: return_false = 'Py_RETURN_FALSE' if ret_type.is_pyobject else 'return 0'}}
{{py: pyval, fval = ('op2', 'b') if order == 'CObj' else ('op1', 'a') }}
+{{py: cfunc_name = '__Pyx_PyFloat_%s%s%s' % ('' if ret_type.is_pyobject else 'Bool', op, order) }}
+{{py: zerodiv_check = lambda operand, _cfunc_name=cfunc_name: '%s_ZeroDivisionError(%s)' % (_cfunc_name, operand)}}
{{py:
c_op = {
'Add': '+', 'Subtract': '-', 'TrueDivide': '/', 'Divide': '/', 'Remainder': '%',
@@ -795,23 +1086,35 @@
}[op]
}}
-static PyObject* __Pyx_PyFloat_{{op}}{{order}}(PyObject *op1, PyObject *op2, double floatval, CYTHON_UNUSED int inplace) {
+{{if order == 'CObj' and c_op in '%/'}}
+#define {{zerodiv_check('operand')}} if (unlikely(zerodivision_check && ((operand) == 0))) { \
+ PyErr_SetString(PyExc_ZeroDivisionError, "float division{{if op == 'Remainder'}} or modulo{{endif}} by zero"); \
+ return NULL; \
+}
+{{endif}}
+
+static {{c_ret_type}} {{cfunc_name}}(PyObject *op1, PyObject *op2, double floatval, int inplace, int zerodivision_check) {
const double {{'a' if order == 'CObj' else 'b'}} = floatval;
double {{fval}}{{if op not in ('Eq', 'Ne')}}, result{{endif}};
+ // Prevent "unused" warnings.
+ (void)inplace;
+ (void)zerodivision_check;
{{if op in ('Eq', 'Ne')}}
if (op1 == op2) {
- Py_RETURN_{{'TRUE' if op == 'Eq' else 'FALSE'}};
+ {{return_true if op == 'Eq' else return_false}};
}
{{endif}}
if (likely(PyFloat_CheckExact({{pyval}}))) {
{{fval}} = PyFloat_AS_DOUBLE({{pyval}});
+ {{if order == 'CObj' and c_op in '%/'}}{{zerodiv_check(fval)}}{{endif}}
} else
#if PY_MAJOR_VERSION < 3
if (likely(PyInt_CheckExact({{pyval}}))) {
{{fval}} = (double) PyInt_AS_LONG({{pyval}});
+ {{if order == 'CObj' and c_op in '%/'}}{{zerodiv_check(fval)}}{{endif}}
} else
#endif
@@ -820,7 +1123,7 @@
const digit* digits = ((PyLongObject*){{pyval}})->ob_digit;
const Py_ssize_t size = Py_SIZE({{pyval}});
switch (size) {
- case 0: {{fval}} = 0.0; break;
+ case 0: {{if order == 'CObj' and c_op in '%/'}}{{zerodiv_check('0')}}{{else}}{{fval}} = 0.0;{{endif}} break;
case -1: {{fval}} = -(double) digits[0]; break;
case 1: {{fval}} = (double) digits[0]; break;
{{for _size in (2, 3, 4)}}
@@ -840,21 +1143,25 @@
// check above. However, the number of digits that CPython uses for a given PyLong
// value is minimal, and together with the "(size-1) * SHIFT < 53" check above,
// this should make it safe.
+ CYTHON_FALLTHROUGH;
{{endfor}}
default:
#else
{
#endif
{{if op in ('Eq', 'Ne')}}
- return PyFloat_Type.tp_richcompare({{'op1, op2' if order == 'CObj' else 'op2, op1'}}, Py_{{op.upper()}});
+ return {{'' if ret_type.is_pyobject else '__Pyx_PyObject_IsTrueAndDecref'}}(
+ PyFloat_Type.tp_richcompare({{'op1, op2' if order == 'CObj' else 'op2, op1'}}, Py_{{op.upper()}}));
{{else}}
{{fval}} = PyLong_AsDouble({{pyval}});
if (unlikely({{fval}} == -1.0 && PyErr_Occurred())) return NULL;
+ {{if order == 'CObj' and c_op in '%/'}}{{zerodiv_check(fval)}}{{endif}}
{{endif}}
}
} else {
{{if op in ('Eq', 'Ne')}}
- return PyObject_RichCompare(op1, op2, Py_{{op.upper()}});
+ return {{'' if ret_type.is_pyobject else '__Pyx_PyObject_IsTrueAndDecref'}}(
+ PyObject_RichCompare(op1, op2, Py_{{op.upper()}}));
{{elif op == 'Divide'}}
return (inplace ? __Pyx_PyNumber_InPlaceDivide(op1, op2) : __Pyx_PyNumber_Divide(op1, op2));
{{else}}
@@ -864,12 +1171,13 @@
{{if op in ('Eq', 'Ne')}}
if (a {{c_op}} b) {
- Py_RETURN_TRUE;
+ {{return_true}};
} else {
- Py_RETURN_FALSE;
+ {{return_false}};
}
{{else}}
// copied from floatobject.c in Py3.5:
+ {{if order == 'CObj' and c_op in '%/'}}{{zerodiv_check('b')}}{{endif}}
PyFPE_START_PROTECT("{{op.lower() if not op.endswith('Divide') else 'divide'}}", return NULL)
{{if c_op == '%'}}
result = fmod(a, b);
diff -Nru cython-0.26.1/Cython/Utility/Overflow.c cython-0.29.14/Cython/Utility/Overflow.c
--- cython-0.26.1/Cython/Utility/Overflow.c 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Utility/Overflow.c 2019-02-27 12:23:19.000000000 +0000
@@ -1,7 +1,7 @@
/*
These functions provide integer arithmetic with integer checking. They do not
actually raise an exception when an overflow is detected, but rather set a bit
-in the overflow parameter. (This parameter may be re-used accross several
+in the overflow parameter. (This parameter may be re-used across several
arithmetic operations, so should be or-ed rather than assigned to.)
The implementation is divided into two parts, the signed and unsigned basecases,
@@ -47,8 +47,12 @@
#define __Pyx_div_const_no_overflow(a, b, overflow) ((a) / (b))
/////////////// Common.init ///////////////
+//@substitute: naming
-__Pyx_check_twos_complement();
+// FIXME: Propagate the error here instead of just printing it.
+if (unlikely(__Pyx_check_twos_complement())) {
+ PyErr_WriteUnraisable($module_cname);
+}
/////////////// BaseCaseUnsigned.proto ///////////////
@@ -226,8 +230,12 @@
/////////////// SizeCheck.init ///////////////
+//@substitute: naming
-__Pyx_check_sane_{{NAME}}();
+// FIXME: Propagate the error here instead of just printing it.
+if (unlikely(__Pyx_check_sane_{{NAME}}())) {
+ PyErr_WriteUnraisable($module_cname);
+}
/////////////// SizeCheck.proto ///////////////
diff -Nru cython-0.26.1/Cython/Utility/Profile.c cython-0.29.14/Cython/Utility/Profile.c
--- cython-0.26.1/Cython/Utility/Profile.c 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Utility/Profile.c 2019-05-28 19:54:08.000000000 +0000
@@ -1,4 +1,5 @@
/////////////// Profile.proto ///////////////
+//@requires: Exceptions.c::PyErrFetchRestore
//@substitute: naming
// Note that cPython ignores PyTrace_EXCEPTION,
@@ -60,10 +61,10 @@
if (CYTHON_TRACE_NOGIL) { \
PyThreadState *tstate; \
PyGILState_STATE state = PyGILState_Ensure(); \
- tstate = PyThreadState_GET(); \
+ tstate = __Pyx_PyThreadState_Current; \
if (unlikely(tstate->use_tracing) && !tstate->tracing && \
(tstate->c_profilefunc || (CYTHON_TRACE && tstate->c_tracefunc))) { \
- __Pyx_use_tracing = __Pyx_TraceSetupAndCall(&$frame_code_cname, &$frame_cname, funcname, srcfile, firstlineno); \
+ __Pyx_use_tracing = __Pyx_TraceSetupAndCall(&$frame_code_cname, &$frame_cname, tstate, funcname, srcfile, firstlineno); \
} \
PyGILState_Release(state); \
if (unlikely(__Pyx_use_tracing < 0)) goto_error; \
@@ -72,7 +73,7 @@
PyThreadState* tstate = PyThreadState_GET(); \
if (unlikely(tstate->use_tracing) && !tstate->tracing && \
(tstate->c_profilefunc || (CYTHON_TRACE && tstate->c_tracefunc))) { \
- __Pyx_use_tracing = __Pyx_TraceSetupAndCall(&$frame_code_cname, &$frame_cname, funcname, srcfile, firstlineno); \
+ __Pyx_use_tracing = __Pyx_TraceSetupAndCall(&$frame_code_cname, &$frame_cname, tstate, funcname, srcfile, firstlineno); \
if (unlikely(__Pyx_use_tracing < 0)) goto_error; \
} \
}
@@ -81,7 +82,7 @@
{ PyThreadState* tstate = PyThreadState_GET(); \
if (unlikely(tstate->use_tracing) && !tstate->tracing && \
(tstate->c_profilefunc || (CYTHON_TRACE && tstate->c_tracefunc))) { \
- __Pyx_use_tracing = __Pyx_TraceSetupAndCall(&$frame_code_cname, &$frame_cname, funcname, srcfile, firstlineno); \
+ __Pyx_use_tracing = __Pyx_TraceSetupAndCall(&$frame_code_cname, &$frame_cname, tstate, funcname, srcfile, firstlineno); \
if (unlikely(__Pyx_use_tracing < 0)) goto_error; \
} \
}
@@ -89,7 +90,7 @@
#define __Pyx_TraceException() \
if (likely(!__Pyx_use_tracing)); else { \
- PyThreadState* tstate = PyThreadState_GET(); \
+ PyThreadState* tstate = __Pyx_PyThreadState_Current; \
if (tstate->use_tracing && \
(tstate->c_profilefunc || (CYTHON_TRACE && tstate->c_tracefunc))) { \
tstate->tracing++; \
@@ -110,7 +111,7 @@
static void __Pyx_call_return_trace_func(PyThreadState *tstate, PyFrameObject *frame, PyObject *result) {
PyObject *type, *value, *traceback;
- PyErr_Fetch(&type, &value, &traceback);
+ __Pyx_ErrFetchInState(tstate, &type, &value, &traceback);
tstate->tracing++;
tstate->use_tracing = 0;
if (CYTHON_TRACE && tstate->c_tracefunc)
@@ -120,7 +121,7 @@
CYTHON_FRAME_DEL(frame);
tstate->use_tracing = 1;
tstate->tracing--;
- PyErr_Restore(type, value, traceback);
+ __Pyx_ErrRestoreInState(tstate, type, value, traceback);
}
#ifdef WITH_THREAD
@@ -130,14 +131,14 @@
if (CYTHON_TRACE_NOGIL) { \
PyThreadState *tstate; \
PyGILState_STATE state = PyGILState_Ensure(); \
- tstate = PyThreadState_GET(); \
+ tstate = __Pyx_PyThreadState_Current; \
if (tstate->use_tracing) { \
__Pyx_call_return_trace_func(tstate, $frame_cname, (PyObject*)result); \
} \
PyGILState_Release(state); \
} \
} else { \
- PyThreadState* tstate = PyThreadState_GET(); \
+ PyThreadState* tstate = __Pyx_PyThreadState_Current; \
if (tstate->use_tracing) { \
__Pyx_call_return_trace_func(tstate, $frame_cname, (PyObject*)result); \
} \
@@ -146,7 +147,7 @@
#else
#define __Pyx_TraceReturn(result, nogil) \
if (likely(!__Pyx_use_tracing)); else { \
- PyThreadState* tstate = PyThreadState_GET(); \
+ PyThreadState* tstate = __Pyx_PyThreadState_Current; \
if (tstate->use_tracing) { \
__Pyx_call_return_trace_func(tstate, $frame_cname, (PyObject*)result); \
} \
@@ -154,7 +155,7 @@
#endif
static PyCodeObject *__Pyx_createFrameCodeObject(const char *funcname, const char *srcfile, int firstlineno); /*proto*/
- static int __Pyx_TraceSetupAndCall(PyCodeObject** code, PyFrameObject** frame, const char *funcname, const char *srcfile, int firstlineno); /*proto*/
+ static int __Pyx_TraceSetupAndCall(PyCodeObject** code, PyFrameObject** frame, PyThreadState* tstate, const char *funcname, const char *srcfile, int firstlineno); /*proto*/
#else
@@ -172,7 +173,7 @@
static int __Pyx_call_line_trace_func(PyThreadState *tstate, PyFrameObject *frame, int lineno) {
int ret;
PyObject *type, *value, *traceback;
- PyErr_Fetch(&type, &value, &traceback);
+ __Pyx_ErrFetchInState(tstate, &type, &value, &traceback);
__Pyx_PyFrame_SetLineNumber(frame, lineno);
tstate->tracing++;
tstate->use_tracing = 0;
@@ -180,7 +181,7 @@
tstate->use_tracing = 1;
tstate->tracing--;
if (likely(!ret)) {
- PyErr_Restore(type, value, traceback);
+ __Pyx_ErrRestoreInState(tstate, type, value, traceback);
} else {
Py_XDECREF(type);
Py_XDECREF(value);
@@ -197,16 +198,16 @@
int ret = 0; \
PyThreadState *tstate; \
PyGILState_STATE state = PyGILState_Ensure(); \
- tstate = PyThreadState_GET(); \
- if (unlikely(tstate->use_tracing && tstate->c_tracefunc)) { \
+ tstate = __Pyx_PyThreadState_Current; \
+ if (unlikely(tstate->use_tracing && tstate->c_tracefunc && $frame_cname->f_trace)) { \
ret = __Pyx_call_line_trace_func(tstate, $frame_cname, lineno); \
} \
PyGILState_Release(state); \
if (unlikely(ret)) goto_error; \
} \
} else { \
- PyThreadState* tstate = PyThreadState_GET(); \
- if (unlikely(tstate->use_tracing && tstate->c_tracefunc)) { \
+ PyThreadState* tstate = __Pyx_PyThreadState_Current; \
+ if (unlikely(tstate->use_tracing && tstate->c_tracefunc && $frame_cname->f_trace)) { \
int ret = __Pyx_call_line_trace_func(tstate, $frame_cname, lineno); \
if (unlikely(ret)) goto_error; \
} \
@@ -215,8 +216,8 @@
#else
#define __Pyx_TraceLine(lineno, nogil, goto_error) \
if (likely(!__Pyx_use_tracing)); else { \
- PyThreadState* tstate = PyThreadState_GET(); \
- if (unlikely(tstate->use_tracing && tstate->c_tracefunc)) { \
+ PyThreadState* tstate = __Pyx_PyThreadState_Current; \
+ if (unlikely(tstate->use_tracing && tstate->c_tracefunc && $frame_cname->f_trace)) { \
int ret = __Pyx_call_line_trace_func(tstate, $frame_cname, lineno); \
if (unlikely(ret)) goto_error; \
} \
@@ -234,12 +235,12 @@
static int __Pyx_TraceSetupAndCall(PyCodeObject** code,
PyFrameObject** frame,
+ PyThreadState* tstate,
const char *funcname,
const char *srcfile,
int firstlineno) {
PyObject *type, *value, *traceback;
int retval;
- PyThreadState* tstate = PyThreadState_GET();
if (*frame == NULL || !CYTHON_PROFILE_REUSE_FRAME) {
if (*code == NULL) {
*code = __Pyx_createFrameCodeObject(funcname, srcfile, firstlineno);
@@ -266,7 +267,7 @@
retval = 1;
tstate->tracing++;
tstate->use_tracing = 0;
- PyErr_Fetch(&type, &value, &traceback);
+ __Pyx_ErrFetchInState(tstate, &type, &value, &traceback);
#if CYTHON_TRACE
if (tstate->c_tracefunc)
retval = tstate->c_tracefunc(tstate->c_traceobj, *frame, PyTrace_CALL, NULL) == 0;
@@ -277,7 +278,7 @@
(CYTHON_TRACE && tstate->c_tracefunc));
tstate->tracing--;
if (retval) {
- PyErr_Restore(type, value, traceback);
+ __Pyx_ErrRestoreInState(tstate, type, value, traceback);
return tstate->use_tracing && retval;
} else {
Py_XDECREF(type);
@@ -288,27 +289,29 @@
}
static PyCodeObject *__Pyx_createFrameCodeObject(const char *funcname, const char *srcfile, int firstlineno) {
+ PyCodeObject *py_code = 0;
+
+#if PY_MAJOR_VERSION >= 3
+ py_code = PyCode_NewEmpty(srcfile, funcname, firstlineno);
+ // make CPython use a fresh dict for "f_locals" at need (see GH #1836)
+ if (likely(py_code)) {
+ py_code->co_flags |= CO_OPTIMIZED | CO_NEWLOCALS;
+ }
+#else
PyObject *py_srcfile = 0;
PyObject *py_funcname = 0;
- PyCodeObject *py_code = 0;
- #if PY_MAJOR_VERSION < 3
py_funcname = PyString_FromString(funcname);
+ if (unlikely(!py_funcname)) goto bad;
py_srcfile = PyString_FromString(srcfile);
- #else
- py_funcname = PyUnicode_FromString(funcname);
- py_srcfile = PyUnicode_FromString(srcfile);
- #endif
- if (!py_funcname | !py_srcfile) goto bad;
+ if (unlikely(!py_srcfile)) goto bad;
py_code = PyCode_New(
0, /*int argcount,*/
- #if PY_MAJOR_VERSION >= 3
- 0, /*int kwonlyargcount,*/
- #endif
0, /*int nlocals,*/
0, /*int stacksize,*/
- 0, /*int flags,*/
+ // make CPython use a fresh dict for "f_locals" at need (see GH #1836)
+ CO_OPTIMIZED | CO_NEWLOCALS, /*int flags,*/
$empty_bytes, /*PyObject *code,*/
$empty_tuple, /*PyObject *consts,*/
$empty_tuple, /*PyObject *names,*/
@@ -324,6 +327,7 @@
bad:
Py_XDECREF(py_srcfile);
Py_XDECREF(py_funcname);
+#endif
return py_code;
}
diff -Nru cython-0.26.1/Cython/Utility/StringTools.c cython-0.29.14/Cython/Utility/StringTools.c
--- cython-0.26.1/Cython/Utility/StringTools.c 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/Cython/Utility/StringTools.c 2019-01-19 09:25:16.000000000 +0000
@@ -40,7 +40,7 @@
return -1;
// initialise cached hash value
if (PyObject_Hash(*t->p) == -1)
- PyErr_Clear();
+ return -1;
++t;
}
return 0;
@@ -63,10 +63,31 @@
//////////////////// PyUCS4InUnicode.proto ////////////////////
static CYTHON_INLINE int __Pyx_UnicodeContainsUCS4(PyObject* unicode, Py_UCS4 character); /*proto*/
-static CYTHON_INLINE int __Pyx_PyUnicodeBufferContainsUCS4(Py_UNICODE* buffer, Py_ssize_t length, Py_UCS4 character); /*proto*/
//////////////////// PyUCS4InUnicode ////////////////////
+static int __Pyx_PyUnicodeBufferContainsUCS4_SP(Py_UNICODE* buffer, Py_ssize_t length, Py_UCS4 character) {
+ /* handle surrogate pairs for Py_UNICODE buffers in 16bit Unicode builds */
+ Py_UNICODE high_val, low_val;
+ Py_UNICODE* pos;
+ high_val = (Py_UNICODE) (0xD800 | (((character - 0x10000) >> 10) & ((1<<10)-1)));
+ low_val = (Py_UNICODE) (0xDC00 | ( (character - 0x10000) & ((1<<10)-1)));
+ for (pos=buffer; pos < buffer+length-1; pos++) {
+ if (unlikely((high_val == pos[0]) & (low_val == pos[1]))) return 1;
+ }
+ return 0;
+}
+
+static int __Pyx_PyUnicodeBufferContainsUCS4_BMP(Py_UNICODE* buffer, Py_ssize_t length, Py_UCS4 character) {
+ Py_UNICODE uchar;
+ Py_UNICODE* pos;
+ uchar = (Py_UNICODE) character;
+ for (pos=buffer; pos < buffer+length; pos++) {
+ if (unlikely(uchar == pos[0])) return 1;
+ }
+ return 0;
+}
+
static CYTHON_INLINE int __Pyx_UnicodeContainsUCS4(PyObject* unicode, Py_UCS4 character) {
#if CYTHON_PEP393_ENABLED
const int kind = PyUnicode_KIND(unicode);
@@ -80,32 +101,18 @@
return 0;
}
#endif
- return __Pyx_PyUnicodeBufferContainsUCS4(
- PyUnicode_AS_UNICODE(unicode),
- PyUnicode_GET_SIZE(unicode),
- character);
-}
+ if (Py_UNICODE_SIZE == 2 && unlikely(character > 65535)) {
+ return __Pyx_PyUnicodeBufferContainsUCS4_SP(
+ PyUnicode_AS_UNICODE(unicode),
+ PyUnicode_GET_SIZE(unicode),
+ character);
+ } else {
+ return __Pyx_PyUnicodeBufferContainsUCS4_BMP(
+ PyUnicode_AS_UNICODE(unicode),
+ PyUnicode_GET_SIZE(unicode),
+ character);
-static CYTHON_INLINE int __Pyx_PyUnicodeBufferContainsUCS4(Py_UNICODE* buffer, Py_ssize_t length, Py_UCS4 character) {
- Py_UNICODE uchar;
- Py_UNICODE* pos;
- #if Py_UNICODE_SIZE == 2
- if (character > 65535) {
- /* handle surrogate pairs for Py_UNICODE buffers in 16bit Unicode builds */
- Py_UNICODE high_val, low_val;
- high_val = (Py_UNICODE) (0xD800 | (((character - 0x10000) >> 10) & ((1<<10)-1)));
- low_val = (Py_UNICODE) (0xDC00 | ( (character - 0x10000) & ((1<<10)-1)));
- for (pos=buffer; pos < buffer+length-1; pos++) {
- if (unlikely(high_val == pos[0]) & unlikely(low_val == pos[1])) return 1;
- }
- return 0;
}
- #endif
- uchar = (Py_UNICODE) character;
- for (pos=buffer; pos < buffer+length; pos++) {
- if (unlikely(uchar == pos[0])) return 1;
- }
- return 0;
}
@@ -228,6 +235,9 @@
} else {
int result;
PyObject* py_result = PyObject_RichCompare(s1, s2, equals);
+ #if PY_MAJOR_VERSION < 3
+ Py_XDECREF(owned_ref);
+ #endif
if (!py_result)
return -1;
result = __Pyx_PyObject_IsTrue(py_result);
@@ -321,7 +331,7 @@
if (wraparound | boundscheck) {
length = PyByteArray_GET_SIZE(string);
if (wraparound & unlikely(i < 0)) i += length;
- if ((!boundscheck) || likely((0 <= i) & (i < length))) {
+ if ((!boundscheck) || likely(__Pyx_is_valid_index(i, length))) {
return (unsigned char) (PyByteArray_AS_STRING(string)[i]);
} else {
PyErr_SetString(PyExc_IndexError, "bytearray index out of range");
@@ -351,7 +361,7 @@
if (wraparound | boundscheck) {
length = PyByteArray_GET_SIZE(string);
if (wraparound & unlikely(i < 0)) i += length;
- if ((!boundscheck) || likely((0 <= i) & (i < length))) {
+ if ((!boundscheck) || likely(__Pyx_is_valid_index(i, length))) {
PyByteArray_AS_STRING(string)[i] = (char) v;
return 0;
} else {
@@ -384,7 +394,7 @@
if (wraparound | boundscheck) {
length = __Pyx_PyUnicode_GET_LENGTH(ustring);
if (wraparound & unlikely(i < 0)) i += length;
- if ((!boundscheck) || likely((0 <= i) & (i < length))) {
+ if ((!boundscheck) || likely(__Pyx_is_valid_index(i, length))) {
return __Pyx_PyUnicode_READ_CHAR(ustring, i);
} else {
PyErr_SetString(PyExc_IndexError, "string index out of range");
@@ -577,9 +587,8 @@
/////////////// unicode_tailmatch.proto ///////////////
-static int __Pyx_PyUnicode_Tailmatch(PyObject* s, PyObject* substr,
- Py_ssize_t start, Py_ssize_t end, int direction); /*proto*/
-
+static int __Pyx_PyUnicode_Tailmatch(
+ PyObject* s, PyObject* substr, Py_ssize_t start, Py_ssize_t end, int direction); /*proto*/
/////////////// unicode_tailmatch ///////////////
@@ -587,26 +596,31 @@
// tuple of prefixes/suffixes, whereas it's much more common to
// test for a single unicode string.
-static int __Pyx_PyUnicode_Tailmatch(PyObject* s, PyObject* substr,
- Py_ssize_t start, Py_ssize_t end, int direction) {
- if (unlikely(PyTuple_Check(substr))) {
- Py_ssize_t i, count = PyTuple_GET_SIZE(substr);
- for (i = 0; i < count; i++) {
- Py_ssize_t result;
+static int __Pyx_PyUnicode_TailmatchTuple(PyObject* s, PyObject* substrings,
+ Py_ssize_t start, Py_ssize_t end, int direction) {
+ Py_ssize_t i, count = PyTuple_GET_SIZE(substrings);
+ for (i = 0; i < count; i++) {
+ Py_ssize_t result;
#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- result = PyUnicode_Tailmatch(s, PyTuple_GET_ITEM(substr, i),
- start, end, direction);
+ result = PyUnicode_Tailmatch(s, PyTuple_GET_ITEM(substrings, i),
+ start, end, direction);
#else
- PyObject* sub = PySequence_ITEM(substr, i);
- if (unlikely(!sub)) return -1;
- result = PyUnicode_Tailmatch(s, sub, start, end, direction);
- Py_DECREF(sub);
+ PyObject* sub = PySequence_ITEM(substrings, i);
+ if (unlikely(!sub)) return -1;
+ result = PyUnicode_Tailmatch(s, sub, start, end, direction);
+ Py_DECREF(sub);
#endif
- if (result) {
- return (int) result;
- }
+ if (result) {
+ return (int) result;
}
- return 0;
+ }
+ return 0;
+}
+
+static int __Pyx_PyUnicode_Tailmatch(PyObject* s, PyObject* substr,
+ Py_ssize_t start, Py_ssize_t end, int direction) {
+ if (unlikely(PyTuple_Check(substr))) {
+ return __Pyx_PyUnicode_TailmatchTuple(s, substr, start, end, direction);
}
return (int) PyUnicode_Tailmatch(s, substr, start, end, direction);
}
@@ -677,26 +691,31 @@
return retval;
}
-static int __Pyx_PyBytes_Tailmatch(PyObject* self, PyObject* substr,
- Py_ssize_t start, Py_ssize_t end, int direction) {
- if (unlikely(PyTuple_Check(substr))) {
- Py_ssize_t i, count = PyTuple_GET_SIZE(substr);
- for (i = 0; i < count; i++) {
- int result;
+static int __Pyx_PyBytes_TailmatchTuple(PyObject* self, PyObject* substrings,
+ Py_ssize_t start, Py_ssize_t end, int direction) {
+ Py_ssize_t i, count = PyTuple_GET_SIZE(substrings);
+ for (i = 0; i < count; i++) {
+ int result;
#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- result = __Pyx_PyBytes_SingleTailmatch(self, PyTuple_GET_ITEM(substr, i),
- start, end, direction);
+ result = __Pyx_PyBytes_SingleTailmatch(self, PyTuple_GET_ITEM(substrings, i),
+ start, end, direction);
#else
- PyObject* sub = PySequence_ITEM(substr, i);
- if (unlikely(!sub)) return -1;
- result = __Pyx_PyBytes_SingleTailmatch(self, sub, start, end, direction);
- Py_DECREF(sub);
+ PyObject* sub = PySequence_ITEM(substrings, i);
+ if (unlikely(!sub)) return -1;
+ result = __Pyx_PyBytes_SingleTailmatch(self, sub, start, end, direction);
+ Py_DECREF(sub);
#endif
- if (result) {
- return result;
- }
+ if (result) {
+ return result;
}
- return 0;
+ }
+ return 0;
+}
+
+static int __Pyx_PyBytes_Tailmatch(PyObject* self, PyObject* substr,
+ Py_ssize_t start, Py_ssize_t end, int direction) {
+ if (unlikely(PyTuple_Check(substr))) {
+ return __Pyx_PyBytes_TailmatchTuple(self, substr, start, end, direction);
}
return __Pyx_PyBytes_SingleTailmatch(self, substr, start, end, direction);
@@ -733,15 +752,15 @@
/////////////// bytes_index ///////////////
static CYTHON_INLINE char __Pyx_PyBytes_GetItemInt(PyObject* bytes, Py_ssize_t index, int check_bounds) {
+ if (index < 0)
+ index += PyBytes_GET_SIZE(bytes);
if (check_bounds) {
Py_ssize_t size = PyBytes_GET_SIZE(bytes);
- if (unlikely(index >= size) | ((index < 0) & unlikely(index < -size))) {
+ if (unlikely(!__Pyx_is_valid_index(index, size))) {
PyErr_SetString(PyExc_IndexError, "string index out of range");
return (char) -1;
}
}
- if (index < 0)
- index += PyBytes_GET_SIZE(bytes);
return PyBytes_AS_STRING(bytes)[index];
}
@@ -822,9 +841,9 @@
ukind = __Pyx_PyUnicode_KIND(uval);
udata = __Pyx_PyUnicode_DATA(uval);
if (!CYTHON_PEP393_ENABLED || ukind == result_ukind) {
- memcpy((char *)result_udata + char_pos * result_ukind, udata, ulength * result_ukind);
+ memcpy((char *)result_udata + char_pos * result_ukind, udata, (size_t) (ulength * result_ukind));
} else {
- #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030300F0
+ #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030300F0 || defined(_PyUnicode_FastCopyCharacters)
_PyUnicode_FastCopyCharacters(result_uval, char_pos, uval, 0, ulength);
#else
Py_ssize_t j;
@@ -896,8 +915,8 @@
#else
// non-CPython
{
- uval = NULL;
PyObject *sign = NULL, *padding = NULL;
+ uval = NULL;
if (uoffset > 0) {
prepend_sign = !!prepend_sign;
if (uoffset > prepend_sign) {
@@ -971,7 +990,7 @@
{
// CPython calls PyNumber_Index() internally
ival = __Pyx_PyIndex_AsSsize_t(value);
- if (unlikely((ival < 0) | (ival > 255))) {
+ if (unlikely(!__Pyx_is_valid_index(ival, 256))) {
if (ival == -1 && PyErr_Occurred())
return -1;
goto bad_range;
@@ -993,7 +1012,7 @@
static CYTHON_INLINE int __Pyx_PyByteArray_Append(PyObject* bytearray, int value) {
PyObject *pyval, *retval;
#if CYTHON_COMPILING_IN_CPYTHON
- if (likely((value >= 0) & (value <= 255))) {
+ if (likely(__Pyx_is_valid_index(value, 256))) {
Py_ssize_t n = Py_SIZE(bytearray);
if (likely(n != PY_SSIZE_T_MAX)) {
if (unlikely(PyByteArray_Resize(bytearray, n + 1) < 0))
@@ -1120,3 +1139,27 @@
Py_DECREF(s);
return result;
}
+
+
+//////////////////// PyUnicode_Unicode.proto ////////////////////
+
+static CYTHON_INLINE PyObject* __Pyx_PyUnicode_Unicode(PyObject *obj);/*proto*/
+
+//////////////////// PyUnicode_Unicode ////////////////////
+
+static CYTHON_INLINE PyObject* __Pyx_PyUnicode_Unicode(PyObject *obj) {
+ if (unlikely(obj == Py_None))
+ obj = PYUNICODE("None");
+ return __Pyx_NewRef(obj);
+}
+
+
+//////////////////// PyObject_Unicode.proto ////////////////////
+
+#if PY_MAJOR_VERSION >= 3
+#define __Pyx_PyObject_Unicode(obj) \
+ (likely(PyUnicode_CheckExact(obj)) ? __Pyx_NewRef(obj) : PyObject_Str(obj))
+#else
+#define __Pyx_PyObject_Unicode(obj) \
+ (likely(PyUnicode_CheckExact(obj)) ? __Pyx_NewRef(obj) : PyObject_Unicode(obj))
+#endif
diff -Nru cython-0.26.1/Cython/Utility/TypeConversion.c cython-0.29.14/Cython/Utility/TypeConversion.c
--- cython-0.26.1/Cython/Utility/TypeConversion.c 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Utility/TypeConversion.c 2019-02-27 12:23:19.000000000 +0000
@@ -16,6 +16,14 @@
(is_signed || likely(v < (type)PY_SSIZE_T_MAX || \
v == (type)PY_SSIZE_T_MAX))) )
+static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) {
+ // Optimisation from Section 14.2 "Bounds Checking" in
+ // https://www.agner.org/optimize/optimizing_cpp.pdf
+ // See https://bugs.python.org/issue28397
+ // The cast to unsigned effectively tests for "0 <= i < limit".
+ return (size_t) i < (size_t) limit;
+}
+
// fast and unsafe abs(Py_ssize_t) that ignores the overflow for (-PY_SSIZE_T_MAX-1)
#if defined (__cplusplus) && __cplusplus >= 201103L
#include <cstdlib>
@@ -24,10 +32,10 @@
#define __Pyx_sst_abs(value) abs(value)
#elif SIZEOF_LONG >= SIZEOF_SIZE_T
#define __Pyx_sst_abs(value) labs(value)
-#elif defined (_MSC_VER) && defined (_M_X64)
+#elif defined (_MSC_VER)
// abs() is defined for long, but 64-bits type on MSVC is long long.
// Use MS-specific _abs64 instead.
- #define __Pyx_sst_abs(value) _abs64(value)
+ #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value))
#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
#define __Pyx_sst_abs(value) llabs(value)
#elif defined (__GNUC__)
@@ -54,6 +62,12 @@
#define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize
#endif
+#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s))
#define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s))
#define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s))
#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s))
@@ -65,16 +79,12 @@
#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s)
#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s)
-#if PY_MAJOR_VERSION < 3
-static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u)
-{
+// There used to be a Py_UNICODE_strlen() in CPython 3.x, but it is deprecated since Py3.3.
+static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) {
const Py_UNICODE *u_end = u;
while (*u_end++) ;
return (size_t)(u_end - u - 1);
}
-#else
-#define __Pyx_Py_UNICODE_strlen Py_UNICODE_strlen
-#endif
#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u))
#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode
@@ -82,10 +92,14 @@
#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj)
#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None)
-#define __Pyx_PyBool_FromLong(b) ((b) ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False))
+static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b);
static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*);
+static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*);
static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x);
+#define __Pyx_PySequence_Tuple(obj) \
+ (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj))
+
static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*);
static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t);
@@ -172,7 +186,7 @@
if (!default_encoding) goto bad;
default_encoding_c = PyBytes_AsString(default_encoding);
if (!default_encoding_c) goto bad;
- __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c));
+ __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1);
if (!__PYX_DEFAULT_STRING_ENCODING) goto bad;
strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c);
Py_DECREF(default_encoding);
@@ -198,51 +212,61 @@
return __Pyx_PyObject_AsStringAndSize(o, &ignore);
}
-// Py3.7 returns a "const char*" for unicode strings
-static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) {
-#if CYTHON_COMPILING_IN_CPYTHON && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT)
- if (
-#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII
- __Pyx_sys_getdefaultencoding_not_ascii &&
-#endif
- PyUnicode_Check(o)) {
-#if PY_VERSION_HEX < 0x03030000
- char* defenc_c;
- // borrowed reference, cached internally in 'o' by CPython
- PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL);
- if (!defenc) return NULL;
- defenc_c = PyBytes_AS_STRING(defenc);
+#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT
+#if !CYTHON_PEP393_ENABLED
+static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) {
+ char* defenc_c;
+ // borrowed reference, cached internally in 'o' by CPython
+ PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL);
+ if (!defenc) return NULL;
+ defenc_c = PyBytes_AS_STRING(defenc);
#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII
- {
- char* end = defenc_c + PyBytes_GET_SIZE(defenc);
- char* c;
- for (c = defenc_c; c < end; c++) {
- if ((unsigned char) (*c) >= 128) {
- // raise the error
- PyUnicode_AsASCIIString(o);
- return NULL;
- }
+ {
+ char* end = defenc_c + PyBytes_GET_SIZE(defenc);
+ char* c;
+ for (c = defenc_c; c < end; c++) {
+ if ((unsigned char) (*c) >= 128) {
+ // raise the error
+ PyUnicode_AsASCIIString(o);
+ return NULL;
}
}
+ }
#endif /*__PYX_DEFAULT_STRING_ENCODING_IS_ASCII*/
- *length = PyBytes_GET_SIZE(defenc);
- return defenc_c;
-#else /* PY_VERSION_HEX < 0x03030000 */
- if (__Pyx_PyUnicode_READY(o) == -1) return NULL;
+ *length = PyBytes_GET_SIZE(defenc);
+ return defenc_c;
+}
+
+#else /* CYTHON_PEP393_ENABLED: */
+
+static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) {
+ if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL;
#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII
- if (PyUnicode_IS_ASCII(o)) {
- // cached for the lifetime of the object
- *length = PyUnicode_GET_LENGTH(o);
- return PyUnicode_AsUTF8(o);
- } else {
- // raise the error
- PyUnicode_AsASCIIString(o);
- return NULL;
- }
+ if (likely(PyUnicode_IS_ASCII(o))) {
+ // cached for the lifetime of the object
+ *length = PyUnicode_GET_LENGTH(o);
+ return PyUnicode_AsUTF8(o);
+ } else {
+ // raise the error
+ PyUnicode_AsASCIIString(o);
+ return NULL;
+ }
#else /* __PYX_DEFAULT_STRING_ENCODING_IS_ASCII */
- return PyUnicode_AsUTF8AndSize(o, length);
+ return PyUnicode_AsUTF8AndSize(o, length);
#endif /* __PYX_DEFAULT_STRING_ENCODING_IS_ASCII */
-#endif /* PY_VERSION_HEX < 0x03030000 */
+}
+#endif /* CYTHON_PEP393_ENABLED */
+#endif
+
+// Py3.7 returns a "const char*" for unicode strings
+static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) {
+#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT
+ if (
+#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII
+ __Pyx_sys_getdefaultencoding_not_ascii &&
+#endif
+ PyUnicode_Check(o)) {
+ return __Pyx_PyUnicode_AsStringAndSize(o, length);
} else
#endif /* __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT */
@@ -270,6 +294,36 @@
else return PyObject_IsTrue(x);
}
+static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) {
+ int retval;
+ if (unlikely(!x)) return -1;
+ retval = __Pyx_PyObject_IsTrue(x);
+ Py_DECREF(x);
+ return retval;
+}
+
+static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) {
+#if PY_MAJOR_VERSION >= 3
+ if (PyLong_Check(result)) {
+ // CPython issue #17576: warn if 'result' not of exact type int.
+ if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1,
+ "__int__ returned non-int (type %.200s). "
+ "The ability to return an instance of a strict subclass of int "
+ "is deprecated, and may be removed in a future version of Python.",
+ Py_TYPE(result)->tp_name)) {
+ Py_DECREF(result);
+ return NULL;
+ }
+ return result;
+ }
+#endif
+ PyErr_Format(PyExc_TypeError,
+ "__%.4s__ returned non-%.4s (type %.200s)",
+ type_name, type_name, Py_TYPE(result)->tp_name);
+ Py_DECREF(result);
+ return NULL;
+}
+
static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) {
#if CYTHON_USE_TYPE_SLOTS
PyNumberMethods *m;
@@ -277,9 +331,9 @@
const char *name = NULL;
PyObject *res = NULL;
#if PY_MAJOR_VERSION < 3
- if (PyInt_Check(x) || PyLong_Check(x))
+ if (likely(PyInt_Check(x) || PyLong_Check(x)))
#else
- if (PyLong_Check(x))
+ if (likely(PyLong_Check(x)))
#endif
return __Pyx_NewRef(x);
#if CYTHON_USE_TYPE_SLOTS
@@ -287,32 +341,30 @@
#if PY_MAJOR_VERSION < 3
if (m && m->nb_int) {
name = "int";
- res = PyNumber_Int(x);
+ res = m->nb_int(x);
}
else if (m && m->nb_long) {
name = "long";
- res = PyNumber_Long(x);
+ res = m->nb_long(x);
}
#else
- if (m && m->nb_int) {
+ if (likely(m && m->nb_int)) {
name = "int";
- res = PyNumber_Long(x);
+ res = m->nb_int(x);
}
#endif
#else
- res = PyNumber_Int(x);
+ if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) {
+ res = PyNumber_Int(x);
+ }
#endif
- if (res) {
+ if (likely(res)) {
#if PY_MAJOR_VERSION < 3
- if (!PyInt_Check(res) && !PyLong_Check(res)) {
+ if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) {
#else
- if (!PyLong_Check(res)) {
+ if (unlikely(!PyLong_CheckExact(res))) {
#endif
- PyErr_Format(PyExc_TypeError,
- "__%.4s__ returned non-%.4s (type %.200s)",
- name, name, Py_TYPE(res)->tp_name);
- Py_DECREF(res);
- return NULL;
+ return __Pyx_PyNumber_IntOrLongWrongResultType(res, name);
}
}
else if (!PyErr_Occurred()) {
@@ -332,7 +384,7 @@
if (sizeof(Py_ssize_t) >= sizeof(long))
return PyInt_AS_LONG(b);
else
- return PyInt_AsSsize_t(x);
+ return PyInt_AsSsize_t(b);
}
#endif
if (likely(PyLong_CheckExact(b))) {
@@ -367,6 +419,12 @@
return ival;
}
+
+static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) {
+ return b ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False);
+}
+
+
static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) {
return PyInt_FromSize_t(ival);
}
@@ -479,18 +537,23 @@
/////////////// ObjectAsUCS4 ///////////////
-static Py_UCS4 __Pyx__PyObject_AsPy_UCS4(PyObject* x) {
- long ival;
- ival = __Pyx_PyInt_As_long(x);
- if (unlikely(ival < 0)) {
+static Py_UCS4 __Pyx__PyObject_AsPy_UCS4_raise_error(long ival) {
+ if (ival < 0) {
if (!PyErr_Occurred())
PyErr_SetString(PyExc_OverflowError,
"cannot convert negative value to Py_UCS4");
- return (Py_UCS4)-1;
- } else if (unlikely(ival > 1114111)) {
+ } else {
PyErr_SetString(PyExc_OverflowError,
"value too large to convert to Py_UCS4");
- return (Py_UCS4)-1;
+ }
+ return (Py_UCS4)-1;
+}
+
+static Py_UCS4 __Pyx__PyObject_AsPy_UCS4(PyObject* x) {
+ long ival;
+ ival = __Pyx_PyInt_As_long(x);
+ if (unlikely(!__Pyx_is_valid_index(ival, 1114111 + 1))) {
+ return __Pyx__PyObject_AsPy_UCS4_raise_error(ival);
}
return (Py_UCS4)ival;
}
@@ -532,14 +595,16 @@
#endif
ival = __Pyx_PyInt_As_long(x);
}
- if (unlikely(ival < 0)) {
- if (!PyErr_Occurred())
+ if (unlikely(!__Pyx_is_valid_index(ival, maxval + 1))) {
+ if (ival < 0) {
+ if (!PyErr_Occurred())
+ PyErr_SetString(PyExc_OverflowError,
+ "cannot convert negative value to Py_UNICODE");
+ return (Py_UNICODE)-1;
+ } else {
PyErr_SetString(PyExc_OverflowError,
- "cannot convert negative value to Py_UNICODE");
- return (Py_UNICODE)-1;
- } else if (unlikely(ival > maxval)) {
- PyErr_SetString(PyExc_OverflowError,
- "value too large to convert to Py_UNICODE");
+ "value too large to convert to Py_UNICODE");
+ }
return (Py_UNICODE)-1;
}
return (Py_UNICODE)ival;
@@ -553,7 +618,7 @@
/////////////// CIntToPy ///////////////
static CYTHON_INLINE PyObject* {{TO_PY_FUNCTION}}({{TYPE}} value) {
- const {{TYPE}} neg_one = ({{TYPE}}) -1, const_zero = ({{TYPE}}) 0;
+ const {{TYPE}} neg_one = ({{TYPE}}) (({{TYPE}}) 0 - ({{TYPE}}) 1), const_zero = ({{TYPE}}) 0;
const int is_unsigned = neg_one > const_zero;
if (is_unsigned) {
if (sizeof({{TYPE}}) < sizeof(long)) {
@@ -610,7 +675,8 @@
};
static const char DIGITS_HEX[2*16+1] = {
- "0123456789abcdef0123456789ABCDEF"
+ "0123456789abcdef"
+ "0123456789ABCDEF"
};
@@ -636,58 +702,68 @@
// NOTE: inlining because most arguments are constant, which collapses lots of code below
+// GCC diagnostic pragmas were introduced in GCC 4.6
+#if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6))
+#define GCC_DIAGNOSTIC
+#endif
static CYTHON_INLINE PyObject* {{TO_PY_FUNCTION}}({{TYPE}} value, Py_ssize_t width, char padding_char, char format_char) {
// simple and conservative C string allocation on the stack: each byte gives at most 3 digits, plus sign
char digits[sizeof({{TYPE}})*3+2];
// 'dpos' points to end of digits array + 1 initially to allow for pre-decrement looping
char *dpos, *end = digits + sizeof({{TYPE}})*3+2;
const char *hex_digits = DIGITS_HEX;
- Py_ssize_t ulength;
- int length, prepend_sign, last_one_off;
+ Py_ssize_t length, ulength;
+ int prepend_sign, last_one_off;
{{TYPE}} remaining;
+#ifdef GCC_DIAGNOSTIC
+#pragma GCC diagnostic push
+#pragma GCC diagnostic ignored "-Wconversion"
+#endif
const {{TYPE}} neg_one = ({{TYPE}}) -1, const_zero = ({{TYPE}}) 0;
+#ifdef GCC_DIAGNOSTIC
+#pragma GCC diagnostic pop
+#endif
const int is_unsigned = neg_one > const_zero;
if (format_char == 'X') {
hex_digits += 16;
format_char = 'x';
- };
+ }
// surprise: even trivial sprintf() calls don't get optimised in gcc (4.8)
remaining = value; /* not using abs(value) to avoid overflow problems */
last_one_off = 0;
dpos = end;
- while (remaining != 0) {
+ do {
int digit_pos;
switch (format_char) {
case 'o':
digit_pos = abs((int)(remaining % (8*8)));
- remaining = remaining / (8*8);
+ remaining = ({{TYPE}}) (remaining / (8*8));
dpos -= 2;
- *(uint16_t*)dpos = ((uint16_t*)DIGIT_PAIRS_8)[digit_pos]; /* copy 2 digits at a time */
+ *(uint16_t*)dpos = ((const uint16_t*)DIGIT_PAIRS_8)[digit_pos]; /* copy 2 digits at a time */
last_one_off = (digit_pos < 8);
break;
case 'd':
digit_pos = abs((int)(remaining % (10*10)));
- remaining = remaining / (10*10);
+ remaining = ({{TYPE}}) (remaining / (10*10));
dpos -= 2;
- *(uint16_t*)dpos = ((uint16_t*)DIGIT_PAIRS_10)[digit_pos]; /* copy 2 digits at a time */
+ *(uint16_t*)dpos = ((const uint16_t*)DIGIT_PAIRS_10)[digit_pos]; /* copy 2 digits at a time */
last_one_off = (digit_pos < 10);
break;
case 'x':
*(--dpos) = hex_digits[abs((int)(remaining % 16))];
- remaining = remaining / 16;
+ remaining = ({{TYPE}}) (remaining / 16);
break;
default:
assert(0);
break;
}
- }
+ } while (unlikely(remaining != 0));
+
if (last_one_off) {
assert(*dpos == '0');
dpos++;
- } else if (unlikely(dpos == end)) {
- *(--dpos) = '0';
}
length = end - dpos;
ulength = length;
@@ -708,7 +784,7 @@
if (ulength == 1) {
return PyUnicode_FromOrdinal(*dpos);
}
- return __Pyx_PyUnicode_BuildFromAscii(ulength, dpos, length, prepend_sign, padding_char);
+ return __Pyx_PyUnicode_BuildFromAscii(ulength, dpos, (int) length, prepend_sign, padding_char);
}
@@ -775,7 +851,7 @@
{{py: from Cython.Utility import pylong_join }}
static CYTHON_INLINE {{TYPE}} {{FROM_PY_FUNCTION}}(PyObject *x) {
- const {{TYPE}} neg_one = ({{TYPE}}) -1, const_zero = ({{TYPE}}) 0;
+ const {{TYPE}} neg_one = ({{TYPE}}) (({{TYPE}}) 0 - ({{TYPE}}) 1), const_zero = ({{TYPE}}) 0;
const int is_unsigned = neg_one > const_zero;
#if PY_MAJOR_VERSION < 3
if (likely(PyInt_Check(x))) {
diff -Nru cython-0.26.1/Cython/Utils.py cython-0.29.14/Cython/Utils.py
--- cython-0.26.1/Cython/Utils.py 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Cython/Utils.py 2019-11-01 14:13:39.000000000 +0000
@@ -10,6 +10,11 @@
except ImportError:
basestring = str
+try:
+ FileNotFoundError
+except NameError:
+ FileNotFoundError = OSError
+
import os
import sys
import re
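The hunk above backfills ``FileNotFoundError`` on Python 2, where only ``OSError`` exists, so later code can catch the more specific name on both versions. A minimal self-contained sketch of the same shim (``read_first_line`` is a hypothetical caller, not from the source):

```python
# Python 2 has no FileNotFoundError; alias it to its closest
# ancestor so "except FileNotFoundError" works on both versions.
try:
    FileNotFoundError
except NameError:  # Python 2
    FileNotFoundError = OSError


def read_first_line(path):
    """Hypothetical helper: return the first line, or None if the file is missing."""
    try:
        with open(path) as f:
            return f.readline()
    except FileNotFoundError:
        return None
```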
@@ -20,9 +25,14 @@
modification_time = os.path.getmtime
+_function_caches = []
+def clear_function_caches():
+ for cache in _function_caches:
+ cache.clear()
def cached_function(f):
cache = {}
+ _function_caches.append(cache)
uncomputed = object()
def wrapper(*args):
res = cache.get(args, uncomputed)
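The hunk above registers every memoisation dict in ``_function_caches`` so that ``clear_function_caches()`` can reset them all at once. A runnable sketch of the whole decorator; the part of ``wrapper`` below the ``cache.get`` line is not shown in the diff, so the store-back step here is an assumption based on the sentinel pattern:

```python
_function_caches = []


def clear_function_caches():
    # Invalidate every @cached_function result in one call, e.g. when
    # previously cached lookups may have become stale.
    for cache in _function_caches:
        cache.clear()


def cached_function(f):
    cache = {}
    _function_caches.append(cache)   # register for global clearing
    uncomputed = object()            # sentinel: None is a valid cached result
    def wrapper(*args):
        res = cache.get(args, uncomputed)
        if res is uncomputed:
            # Assumed completion: compute once, store back into the cache.
            res = cache[args] = f(*args)
        return res
    return wrapper
```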
@@ -114,54 +124,6 @@
@cached_function
-def search_include_directories(dirs, qualified_name, suffix, pos,
- include=False, sys_path=False):
- # Search the list of include directories for the given
- # file name. If a source file position is given, first
- # searches the directory containing that file. Returns
- # None if not found, but does not report an error.
- # The 'include' option will disable package dereferencing.
- # If 'sys_path' is True, also search sys.path.
- if sys_path:
- dirs = dirs + tuple(sys.path)
- if pos:
- file_desc = pos[0]
- from Cython.Compiler.Scanning import FileSourceDescriptor
- if not isinstance(file_desc, FileSourceDescriptor):
- raise RuntimeError("Only file sources for code supported")
- if include:
- dirs = (os.path.dirname(file_desc.filename),) + dirs
- else:
- dirs = (find_root_package_dir(file_desc.filename),) + dirs
-
- dotted_filename = qualified_name
- if suffix:
- dotted_filename += suffix
- if not include:
- names = qualified_name.split('.')
- package_names = tuple(names[:-1])
- module_name = names[-1]
- module_filename = module_name + suffix
- package_filename = "__init__" + suffix
-
- for dir in dirs:
- path = os.path.join(dir, dotted_filename)
- if path_exists(path):
- return path
- if not include:
- package_dir = check_package_dir(dir, package_names)
- if package_dir is not None:
- path = os.path.join(package_dir, module_filename)
- if path_exists(path):
- return path
- path = os.path.join(dir, package_dir, module_name,
- package_filename)
- if path_exists(path):
- return path
- return None
-
-
-@cached_function
def find_root_package_dir(file_path):
dir = os.path.dirname(file_path)
if file_path == dir:
@@ -228,43 +190,28 @@
# support for source file encoding detection
-_match_file_encoding = re.compile(u"coding[:=]\s*([-\w.]+)").search
-
-
-def detect_file_encoding(source_filename):
- f = open_source_file(source_filename, encoding="UTF-8", error_handling='ignore')
- try:
- return detect_opened_file_encoding(f)
- finally:
- f.close()
+_match_file_encoding = re.compile(br"(\w*coding)[:=]\s*([-\w.]+)").search
def detect_opened_file_encoding(f):
# PEPs 263 and 3120
- # Most of the time the first two lines fall in the first 250 chars,
+ # Most of the time the first two lines fall in the first couple of hundred chars,
# and this bulk read/split is much faster.
- lines = f.read(250).split(u"\n")
- if len(lines) > 1:
- m = _match_file_encoding(lines[0])
+ lines = ()
+ start = b''
+ while len(lines) < 3:
+ data = f.read(500)
+ start += data
+ lines = start.split(b"\n")
+ if not data:
+ break
+ m = _match_file_encoding(lines[0])
+ if m and m.group(1) != b'c_string_encoding':
+ return m.group(2).decode('iso8859-1')
+ elif len(lines) > 1:
+ m = _match_file_encoding(lines[1])
if m:
- return m.group(1)
- elif len(lines) > 2:
- m = _match_file_encoding(lines[1])
- if m:
- return m.group(1)
- else:
- return "UTF-8"
- # Fallback to one-char-at-a-time detection.
- f.seek(0)
- chars = []
- for i in range(2):
- c = f.read(1)
- while c and c != u'\n':
- chars.append(c)
- c = f.read(1)
- encoding = _match_file_encoding(u''.join(chars))
- if encoding:
- return encoding.group(1)
+ return m.group(2).decode('iso8859-1')
return "UTF-8"
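The rewritten detector above matches the PEP 263 coding cookie on raw bytes and uses ``group(1)`` to skip Cython's own ``c_string_encoding`` directive, which would otherwise match ``\w*coding``. The core matching step can be exercised on its own (``cookie_encoding`` is a hypothetical wrapper, not from the source):

```python
import re

# Same pattern as in the hunk above: group(1) lets the caller reject
# Cython's "c_string_encoding" directive, which also ends in "coding".
_match_file_encoding = re.compile(br"(\w*coding)[:=]\s*([-\w.]+)").search

def cookie_encoding(line):
    """Return the encoding declared in one line of bytes, or None."""
    m = _match_file_encoding(line)
    if m and m.group(1) != b'c_string_encoding':
        return m.group(2).decode('iso8859-1')
    return None
```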
@@ -278,32 +225,33 @@
f.seek(0)
-def open_source_file(source_filename, mode="r",
- encoding=None, error_handling=None):
- if encoding is None:
- # Most of the time the coding is unspecified, so be optimistic that
- # it's UTF-8.
- f = open_source_file(source_filename, encoding="UTF-8", mode=mode, error_handling='ignore')
- encoding = detect_opened_file_encoding(f)
- if encoding == "UTF-8" and error_handling == 'ignore':
+def open_source_file(source_filename, encoding=None, error_handling=None):
+ stream = None
+ try:
+ if encoding is None:
+ # Most of the time the encoding is not specified, so try hard to open the file only once.
+ f = io.open(source_filename, 'rb')
+ encoding = detect_opened_file_encoding(f)
f.seek(0)
- skip_bom(f)
- return f
+ stream = io.TextIOWrapper(f, encoding=encoding, errors=error_handling)
else:
- f.close()
+ stream = io.open(source_filename, encoding=encoding, errors=error_handling)
- if not os.path.exists(source_filename):
+ except OSError:
+ if os.path.exists(source_filename):
+ raise # File is there, but something went wrong reading from it.
+ # Allow source files to be in zip files etc.
try:
loader = __loader__
if source_filename.startswith(loader.archive):
- return open_source_from_loader(
+ stream = open_source_from_loader(
loader, source_filename,
encoding, error_handling)
except (NameError, AttributeError):
pass
- stream = io.open(source_filename, mode=mode,
- encoding=encoding, errors=error_handling)
+ if stream is None:
+ raise FileNotFoundError(source_filename)
skip_bom(stream)
return stream
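The rewrite above opens the file once in binary mode, sniffs the encoding, rewinds, and wraps the same stream in a ``TextIOWrapper`` instead of reopening the file. A simplified, self-contained sketch of that flow, using an inline cookie regex in place of ``detect_opened_file_encoding``; the zipimport fallback and BOM skipping of the real function are omitted:

```python
import io
import re

_cookie = re.compile(br"coding[:=]\s*([-\w.]+)").search

def open_source_file(source_filename, encoding=None, error_handling=None):
    # Open exactly once: sniff the PEP 263 cookie from the raw bytes,
    # rewind, and decode the same stream (sketch, not the full function).
    if encoding is None:
        f = io.open(source_filename, 'rb')
        m = _cookie(f.read(250))
        encoding = m.group(1).decode('iso8859-1') if m else 'UTF-8'
        f.seek(0)
        return io.TextIOWrapper(f, encoding=encoding, errors=error_handling)
    return io.open(source_filename, encoding=encoding, errors=error_handling)
```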
@@ -355,7 +303,8 @@
@cached_function
def get_cython_cache_dir():
- """get the cython cache dir
+ r"""
+ Return the base directory containing Cython's caches.
Priority:
@@ -424,7 +373,9 @@
os.close(orig_stream)
-def print_bytes(s, end=b'\n', file=sys.stdout, flush=True):
+def print_bytes(s, header_text=None, end=b'\n', file=sys.stdout, flush=True):
+ if header_text:
+ file.write(header_text) # note: text! => file.write() instead of out.write()
file.flush()
try:
out = file.buffer # Py3
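The new ``header_text`` parameter writes a text label to the stream before the raw bytes go to its underlying buffer. A runnable sketch of the whole function; the body below the lines shown in the hunk follows the Py2/Py3 ``file.buffer`` fallback and is partly an assumption:

```python
import sys

def print_bytes(s, header_text=None, end=b'\n', file=sys.stdout, flush=True):
    if header_text:
        file.write(header_text)  # header is text -> write to the text stream
    file.flush()                 # push buffered text out before raw bytes
    try:
        out = file.buffer        # Py3: the underlying binary buffer
    except AttributeError:
        out = file               # Py2: the file object takes bytes directly
    out.write(s)
    if end:
        out.write(end)
    if flush:
        out.flush()
```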
@@ -481,3 +432,33 @@
orig_vars.pop('__weakref__', None)
return metaclass(cls.__name__, cls.__bases__, orig_vars)
return wrapper
+
+
+def raise_error_if_module_name_forbidden(full_module_name):
+    # it is a bad idea to call the pyx file cython.pyx, so fail early

+ if full_module_name == 'cython' or full_module_name.startswith('cython.'):
+ raise ValueError('cython is a special module, cannot be used as a module name')
+
+
+def build_hex_version(version_string):
+ """
+ Parse and translate '4.3a1' into the readable hex representation '0x040300A1' (like PY_VERSION_HEX).
+ """
+    # First, parse '4.12a1' into [4, 12, 0, 0xA1].
+ digits = []
+ release_status = 0xF0
+ for digit in re.split('([.abrc]+)', version_string):
+ if digit in ('a', 'b', 'rc'):
+ release_status = {'a': 0xA0, 'b': 0xB0, 'rc': 0xC0}[digit]
+ digits = (digits + [0, 0])[:3] # 1.2a1 -> 1.2.0a1
+ elif digit != '.':
+ digits.append(int(digit))
+ digits = (digits + [0] * 3)[:4]
+ digits[3] += release_status
+
+ # Then, build a single hex value, two hex digits per version part.
+ hexversion = 0
+ for digit in digits:
+ hexversion = (hexversion << 8) + digit
+
+ return '0x%08X' % hexversion
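Stripped of its ``+`` diff prefixes, the version parser above runs as-is: each version part is packed into one byte, with the release status (``0xA0``/``0xB0``/``0xC0``/``0xF0``) folded into the last one, mirroring ``PY_VERSION_HEX``:

```python
import re


def build_hex_version(version_string):
    """
    Parse and translate '4.3a1' into the readable hex representation
    '0x040300A1' (like PY_VERSION_HEX).
    """
    # First, parse '4.12a1' into [4, 12, 0, 0xA1].
    digits = []
    release_status = 0xF0  # 0xF0 == final release, as in CPython
    for digit in re.split('([.abrc]+)', version_string):
        if digit in ('a', 'b', 'rc'):
            release_status = {'a': 0xA0, 'b': 0xB0, 'rc': 0xC0}[digit]
            digits = (digits + [0, 0])[:3]  # 1.2a1 -> 1.2.0a1
        elif digit != '.':
            digits.append(int(digit))
    digits = (digits + [0] * 3)[:4]
    digits[3] += release_status

    # Then, build a single hex value, two hex digits per version part.
    hexversion = 0
    for digit in digits:
        hexversion = (hexversion << 8) + digit

    return '0x%08X' % hexversion
```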
diff -Nru cython-0.26.1/debian/changelog cython-0.29.14/debian/changelog
--- cython-0.26.1/debian/changelog 2017-11-07 11:28:03.000000000 +0000
+++ cython-0.29.14/debian/changelog 2020-06-24 02:02:14.000000000 +0000
@@ -1,3 +1,94 @@
+cython (0.29.14-0.1~18.04.sav0) bionic; urgency=medium
+
+ * Backport to Bionic
+
+ -- Rob Savoury Tue, 23 Jun 2020 19:02:14 -0700
+
+cython (0.29.14-0.1) unstable; urgency=medium
+
+ * Non-maintainer upload.
+ * New upstream version, more Python 3.8 fixes. Closes: #942696.
+
+ -- Matthias Klose Mon, 11 Nov 2019 13:16:50 +0100
+
+cython (0.29.13-0.1) unstable; urgency=medium
+
+ * Non-maintainer upload.
+ * New upstream version, supporting Python 3.8. Closes: #942696.
+ * Run the tests for every python version, but ignore failures for
+ a first build.
+
+ -- Matthias Klose Thu, 24 Oct 2019 21:58:37 +0200
+
+cython (0.29.2-2) unstable; urgency=medium
+
+ * Team upload to Debian unstable.
+ * Bump Standards-Version to 4.3.0.
+
+ -- Tobias Hansen Mon, 14 Jan 2019 12:19:20 +0000
+
+cython (0.29.2-1) experimental; urgency=medium
+
+ * Team upload to experimental.
+ * New upstream release. (Closes: #916355)
+ * Remove patches (applied upstream):
+ - fix-fused-test.patch
+ - drop-test.patch
+
+ -- Tobias Hansen Wed, 02 Jan 2019 16:56:36 +0100
+
+cython (0.28.4-1) unstable; urgency=medium
+
+ * New upstream release (Closes: #902551)
+ * Fix missing installation of some debug libraries. (Closes: #902784)
+ * debian/patches/fix-fused-test.patch:
+ fix invalid testcase crashing with gcc-8
+ * debian/patches/drop-test.patch:
+ drop test of behaviour changed in python3.7
+ Closes: #903024
+ * bump standards version
+ * switch to python3-sphinx
+
+ -- Julian Taylor Mon, 23 Jul 2018 17:58:49 +0000
+
+cython (0.28.2-4) unstable; urgency=medium
+
+ * Team upload to unstable.
+
+ -- Tobias Hansen Tue, 26 Jun 2018 11:00:22 +0200
+
+cython (0.28.2-3) experimental; urgency=medium
+
+ * Team upload.
+ * disable_tests.patch: Disable two tests that fail on several architectures.
+
+ -- Tobias Hansen Sun, 10 Jun 2018 17:51:01 +0200
+
+cython (0.28.2-2) experimental; urgency=medium
+
+ * Team upload.
+ * No-change upload to trigger rebuild.
+ Previous build seem to have failed due to outdated buildd chroots
+ after a gcc update.
+
+ -- Tobias Hansen Wed, 06 Jun 2018 10:37:29 +0200
+
+cython (0.28.2-1) experimental; urgency=medium
+
+ * Team upload to Debian experimental.
+ * Update Vcs-* fields for salsa git. (Closes: #895095)
+
+ [ Ondřej Nový ]
+ * d/copyright: Use https protocol in Format field
+ * d/control: Removing redundant Priority field in binary package
+ * d/watch: Use https protocol
+ * d/changelog: Remove trailing whitespaces
+
+ [ Julian Rüth ]
+ * New upstream release
+
+ -- Tobias Hansen Sun, 03 Jun 2018 17:00:54 +0200
+
cython (0.26.1-0.4) unstable; urgency=medium
* Non-maintainer upload.
@@ -96,7 +187,7 @@
- skip for now numpy_test test leading to FTBFS (Closes: #848753)
reported upstream: https://github.com/cython/cython/issues/1589
* debian/patches
- - debup_verify_resolution_GH1533 as a workaround for
+ - debup_verify_resolution_GH1533 as a workaround for
https://github.com/cython/cython/issues/158
-- Yaroslav Halchenko Mon, 23 Jan 2017 17:04:46 -0500
@@ -113,7 +204,7 @@
* Fresh upstream beta bugfix release (Closes: #842296)
* Tools/cython-mode.el installed under usr/share/emacs/site-lisp as a
- part of cython package (there is no -common yet, and python still
+ part of cython package (there is no -common yet, and python still
defaults to 2) (Closes: #794844)
* Pass --shard_count= to runtests.py to run tests in parallel to
speed up package build time. Although we do not parallelize build,
@@ -632,7 +723,7 @@
+ Bump cdbs version to matches policy.
[ Ondrej Certik ]
- * New upstream version
+ * New upstream version
-- Ondrej Certik Fri, 18 Jan 2008 00:21:42 +0100
diff -Nru cython-0.26.1/debian/control cython-0.29.14/debian/control
--- cython-0.26.1/debian/control 2017-11-07 11:28:03.000000000 +0000
+++ cython-0.29.14/debian/control 2019-01-14 12:19:13.000000000 +0000
@@ -8,16 +8,16 @@
dpkg-dev (>= 1.16.1~),
python-all-dev (>= 2.6.6-3~), python-all-dbg,
help2man (>= 1.37.1~),
- python-sphinx,
+ python3-sphinx,
python-numpy (>= 1:1.12.1-3.1) ,
python3-all-dev,
python3-all-dbg,
python3-numpy (>= 1:1.12.1-3.1) ,
gdb,
-Standards-Version: 3.9.8
+Standards-Version: 4.3.0
Homepage: http://cython.org/
-Vcs-Svn: svn://anonscm.debian.org/python-apps/packages/cython/trunk/
-Vcs-Browser: http://anonscm.debian.org/viewvc/python-apps/packages/cython/trunk/
+Vcs-Git: https://salsa.debian.org/python-team/applications/cython.git
+Vcs-Browser: https://salsa.debian.org/python-team/applications/cython
Package: cython
Architecture: any
@@ -40,7 +40,6 @@
Package: cython-dbg
Architecture: any
Section: debug
-Priority: optional
Depends: ${python:Depends}, ${misc:Depends}, ${shlibs:Depends}, cython (= ${binary:Version})
Description: C-Extensions for Python - debug build
This package contains Cython libraries built against versions of
@@ -67,7 +66,6 @@
Package: cython3-dbg
Architecture: any
Section: debug
-Priority: optional
Depends: ${python3:Depends}, ${misc:Depends}, ${shlibs:Depends}, cython3 (= ${binary:Version})
Description: C-Extensions for Python 3 - debug build
This package contains Cython libraries built against versions of
diff -Nru cython-0.26.1/debian/copyright cython-0.29.14/debian/copyright
--- cython-0.26.1/debian/copyright 2017-08-24 20:35:31.000000000 +0000
+++ cython-0.29.14/debian/copyright 2018-11-11 09:16:16.000000000 +0000
@@ -1,4 +1,4 @@
-Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
+Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: cython
Upstream-Contact: cython-devel@python.org
Source: http://github.com/cython/cython
diff -Nru cython-0.26.1/debian/gbp.conf cython-0.29.14/debian/gbp.conf
--- cython-0.26.1/debian/gbp.conf 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/debian/gbp.conf 2018-11-11 09:16:16.000000000 +0000
@@ -0,0 +1,2 @@
+[DEFAULT]
+debian-branch=debian/master
diff -Nru cython-0.26.1/debian/patches/deb_disable_googleanalytics cython-0.29.14/debian/patches/deb_disable_googleanalytics
--- cython-0.26.1/debian/patches/deb_disable_googleanalytics 2017-08-24 20:35:31.000000000 +0000
+++ cython-0.29.14/debian/patches/deb_disable_googleanalytics 2018-11-11 09:16:16.000000000 +0000
@@ -1,11 +1,17 @@
From: Yaroslav Halchenko
+Date: Tue, 25 Mar 2014 13:04:01 -0400
Subject: Disable google analytics calls
See http://lintian.debian.org/tags/privacy-breach-google-adsense.html for reasoning
Origin: Debian
Last-Update: 2014-03-25
+---
+ docs/_templates/layout.html | 12 +++---------
+ 1 file changed, 3 insertions(+), 9 deletions(-)
+diff --git a/docs/_templates/layout.html b/docs/_templates/layout.html
+index a071c96..074eaa6 100644
--- a/docs/_templates/layout.html
+++ b/docs/_templates/layout.html
@@ -2,13 +2,7 @@
diff -Nru cython-0.26.1/debian/patches/deb_nopngmath cython-0.29.14/debian/patches/deb_nopngmath
--- cython-0.26.1/debian/patches/deb_nopngmath 2017-08-24 20:35:31.000000000 +0000
+++ cython-0.29.14/debian/patches/deb_nopngmath 2018-11-11 09:16:16.000000000 +0000
@@ -1,3 +1,14 @@
+From: Python Applications Packaging Team
+
+Date: Thu, 11 Aug 2016 19:56:27 +0000
+Subject: deb_nopngmath
+
+---
+ docs/conf.py | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/docs/conf.py b/docs/conf.py
+index eee2cc2..9c83df4 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -41,7 +41,7 @@ highlight_language = 'cython'
@@ -7,5 +18,5 @@
- 'sphinx.ext.pngmath',
+ # 'sphinx.ext.pngmath',
'sphinx.ext.todo',
- 'sphinx.ext.intersphinx'
- ]
+ 'sphinx.ext.intersphinx',
+ 'sphinx.ext.autodoc'
diff -Nru cython-0.26.1/debian/patches/debup_workaround_verify_resolution_GH1533 cython-0.29.14/debian/patches/debup_workaround_verify_resolution_GH1533
--- cython-0.26.1/debian/patches/debup_workaround_verify_resolution_GH1533 2017-08-24 20:35:31.000000000 +0000
+++ cython-0.29.14/debian/patches/debup_workaround_verify_resolution_GH1533 2018-11-11 09:16:16.000000000 +0000
@@ -1,6 +1,17 @@
+From: Python Applications Packaging Team
+
+Date: Tue, 24 Jan 2017 20:19:43 +0000
+Subject: debup_workaround_verify_resolution_GH1533
+
+---
+ tests/run/cpdef_enums.pyx | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/tests/run/cpdef_enums.pyx b/tests/run/cpdef_enums.pyx
+index d2e0945..afa6ed5 100644
--- a/tests/run/cpdef_enums.pyx
+++ b/tests/run/cpdef_enums.pyx
-@@ -89,7 +89,7 @@ verify_pure_c()
+@@ -90,7 +90,7 @@ verify_pure_c()
def verify_resolution_GH1533():
"""
diff -Nru cython-0.26.1/debian/patches/disable_tests.patch cython-0.29.14/debian/patches/disable_tests.patch
--- cython-0.26.1/debian/patches/disable_tests.patch 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/debian/patches/disable_tests.patch 2018-11-11 09:16:16.000000000 +0000
@@ -0,0 +1,19 @@
+Description: Disable some tests that are failing on certain architectures.
+ The bugs have been reported upstream, see links in the patch below.
+ numpy_subarray fails on big-endian architectures,
+ numpy_memoryview on all powerpc architectures, arm64 and alpha,
+ apparently due to a bug in numpy.
+Author: Tobias Hansen
+
+--- a/tests/bugs.txt
++++ b/tests/bugs.txt
+@@ -55,3 +55,9 @@
+
+ # Inlined generators
+ inlined_generator_expressions
++
++# Skipped in Debian
++# https://github.com/cython/cython/issues/2308
++numpy_memoryview
++# https://github.com/cython/cython/issues/1982
++numpy_subarray
diff -Nru cython-0.26.1/debian/patches/honour_SOURCE_DATE_EPOCH_for_copyright_year cython-0.29.14/debian/patches/honour_SOURCE_DATE_EPOCH_for_copyright_year
--- cython-0.26.1/debian/patches/honour_SOURCE_DATE_EPOCH_for_copyright_year 2017-08-24 20:35:31.000000000 +0000
+++ cython-0.29.14/debian/patches/honour_SOURCE_DATE_EPOCH_for_copyright_year 2018-11-11 09:16:16.000000000 +0000
@@ -1,16 +1,23 @@
-Description: Honour SOURCE_DATE_EPOCH for copyright year
- Uses SOURCE_DATE_EPOCH environment variable (if set) to
- set the copyright year in documentation, to get reproducible build.
-Author: Alexis Bienvenüe
+From: =?utf-8?q?Alexis_Bienven=C3=BCe?=
+Date: Tue, 2 Apr 2016 03:05:43 +0000
+Subject: Honour SOURCE_DATE_EPOCH for copyright year
---- cython-0.23.4+git4-g7eed8d8.orig/docs/conf.py
-+++ cython-0.23.4+git4-g7eed8d8/docs/conf.py
+Uses SOURCE_DATE_EPOCH environment variable (if set) to
+set the copyright year in documentation, to get reproducible build.
+---
+ docs/conf.py | 5 ++++-
+ 1 file changed, 4 insertions(+), 1 deletion(-)
+
+diff --git a/docs/conf.py b/docs/conf.py
+index 9c83df4..cb8fd98 100644
+--- a/docs/conf.py
++++ b/docs/conf.py
@@ -15,7 +15,10 @@ import sys, os, os.path, re
import itertools
import datetime
-YEAR = datetime.date.today().strftime('%Y')
-+if os.environ.has_key('SOURCE_DATE_EPOCH'):
++if os.environ['SOURCE_DATE_EPOCH']:
+ YEAR = datetime.datetime.utcfromtimestamp(float(os.environ.get('SOURCE_DATE_EPOCH'))).strftime('%Y')
+else:
+ YEAR = datetime.date.today().strftime('%Y')
diff -Nru cython-0.26.1/debian/patches/series cython-0.29.14/debian/patches/series
--- cython-0.26.1/debian/patches/series 2017-08-26 14:53:26.000000000 +0000
+++ cython-0.29.14/debian/patches/series 2018-11-11 09:20:11.000000000 +0000
@@ -2,3 +2,4 @@
deb_disable_googleanalytics
honour_SOURCE_DATE_EPOCH_for_copyright_year
debup_workaround_verify_resolution_GH1533
+disable_tests.patch
diff -Nru cython-0.26.1/debian/patches/squeeze-dsc-patch cython-0.29.14/debian/patches/squeeze-dsc-patch
--- cython-0.26.1/debian/patches/squeeze-dsc-patch 2017-08-24 20:35:31.000000000 +0000
+++ cython-0.29.14/debian/patches/squeeze-dsc-patch 1970-01-01 00:00:00.000000000 +0000
@@ -1,24 +0,0 @@
-From: Yaroslav Halchenko
-Subject: patch to build for squeeze
-
-Last-Update: 2012-12-15
-
-diff --git a/debian/control b/debian/control
-index 7a17b37..3962b52 100644
---- a/debian/control
-+++ b/debian/control
-@@ -10,7 +10,6 @@ Build-Depends: debhelper (>= 7.0.50~),
- python-numpy,
- python3-all-dev,
- python3-all-dbg,
-- python3-numpy
- Standards-Version: 3.9.3
- Homepage: http://cython.org/
- Vcs-Svn: svn://svn.debian.org/svn/python-apps/packages/cython/trunk
-diff --git a/debian/cython3-dbg.install b/debian/cython3-dbg.install
-index c687d67..2369b63 100644
---- a/debian/cython3-dbg.install
-+++ b/debian/cython3-dbg.install
-@@ -1 +1 @@
--usr/lib/python3*/*-packages/*/*/*.cpython-3?d*.so
-+usr/lib/python3*/*-packages/*/*/*_d.so
diff -Nru cython-0.26.1/debian/rules cython-0.29.14/debian/rules
--- cython-0.26.1/debian/rules 2017-09-05 15:46:58.000000000 +0000
+++ cython-0.29.14/debian/rules 2019-10-24 19:58:37.000000000 +0000
@@ -113,12 +113,16 @@
ifeq (,$(filter nocheck,$(DEB_BUILD_OPTIONS)))
set -e; for P in $(PY2VERS) $(PY3VERS); do \
echo =============== $$P start ===============; \
- PYTHONPATH=`/bin/ls -d $(CURDIR)/build/lib.*-$$P` \
+ if PYTHONPATH=`/bin/ls -d $(CURDIR)/build/lib.*-$$P` \
/usr/bin/python$$P runtests.py $(RUNTESTSOPTS) \
--no-refnanny -v -v \
--exclude="(parallel|Debugger|annotate_html|numpy_test)" \
--work-dir=build/work-dir 2>&1; \
- echo =============== $$P done ===============; \
+ then \
+ echo =============== $$P done ===============; \
+ else \
+ echo "=============== $$P done (FAILURES IGNORED) ==============="; \
+ fi; \
done
endif
diff -Nru cython-0.26.1/debian/tests/control cython-0.29.14/debian/tests/control
--- cython-0.26.1/debian/tests/control 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/debian/tests/control 2018-11-11 09:16:16.000000000 +0000
@@ -0,0 +1,5 @@
+Tests: import2
+Depends: cython, cython-dbg, python-all, python-all-dbg
+
+Tests: import3
+Depends: cython3, cython3-dbg, python3-all, python3-all-dbg
diff -Nru cython-0.26.1/debian/tests/import2 cython-0.29.14/debian/tests/import2
--- cython-0.26.1/debian/tests/import2 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/debian/tests/import2 2018-11-11 09:16:16.000000000 +0000
@@ -0,0 +1,12 @@
+#!/bin/sh
+set -efu
+
+pys="$(pyversions -r 2>/dev/null)"
+
+cd "$AUTOPKGTEST_TMP"
+
+for py in $pys; do
+ echo "=== $py ==="
+ $py -c "import Cython; import Cython.Compiler.Code"
+ ${py}-dbg -c "import Cython; import Cython.Compiler.Code"
+done
diff -Nru cython-0.26.1/debian/tests/import3 cython-0.29.14/debian/tests/import3
--- cython-0.26.1/debian/tests/import3 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/debian/tests/import3 2018-11-11 09:16:16.000000000 +0000
@@ -0,0 +1,11 @@
+#!/bin/sh
+set -efu
+
+pys="$(py3versions -r 2>/dev/null)"
+
+cd "$AUTOPKGTEST_TMP"
+
+for py in $pys; do
+ echo "=== $py ==="
+ $py -c "import Cython; import Cython.Compiler.Code"
+done
diff -Nru cython-0.26.1/debian/watch cython-0.29.14/debian/watch
--- cython-0.26.1/debian/watch 2017-08-24 20:35:31.000000000 +0000
+++ cython-0.29.14/debian/watch 2018-11-11 09:16:16.000000000 +0000
@@ -1,3 +1,3 @@
version=3
opts=uversionmangle=s/.alpha/~alpha/;s/.beta/~beta/;s/.rc/~rc/;s/([0-9])([ab][0-9]*)/$1~$2/g \
- http://pypi.debian.net/Cython/Cython-(.+)\.(?:zip|tgz|tbz|txz|(?:tar\.(?:gz|bz2|xz)))
+ https://pypi.debian.net/Cython/Cython-(.+)\.(?:zip|tgz|tbz|txz|(?:tar\.(?:gz|bz2|xz)))
diff -Nru cython-0.26.1/Demos/benchmarks/bpnn3.py cython-0.29.14/Demos/benchmarks/bpnn3.py
--- cython-0.26.1/Demos/benchmarks/bpnn3.py 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/Demos/benchmarks/bpnn3.py 2018-09-22 14:18:56.000000000 +0000
@@ -41,7 +41,7 @@
# create weights
self.wi = makeMatrix(self.ni, self.nh)
self.wo = makeMatrix(self.nh, self.no)
- # set them to random vaules
+ # set them to random values
for i in range(self.ni):
for j in range(self.nh):
self.wi[i][j] = rand(-2.0, 2.0)
diff -Nru cython-0.26.1/Demos/callback/README.rst cython-0.29.14/Demos/callback/README.rst
--- cython-0.26.1/Demos/callback/README.rst 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/Demos/callback/README.rst 2018-09-22 14:18:56.000000000 +0000
@@ -0,0 +1,12 @@
+This example demonstrates how you can wrap a C API
+that has a callback interface, so that you can
+pass Python functions to it as callbacks.
+
+The files ``cheesefinder.h`` and ``cheesefinder.c``
+represent the C library to be wrapped.
+
+The file ``cheese.pyx`` is the Cython module
+which wraps it.
+
+The file ``run_cheese.py`` demonstrates how to
+call the wrapper.
diff -Nru cython-0.26.1/Demos/callback/README.txt cython-0.29.14/Demos/callback/README.txt
--- cython-0.26.1/Demos/callback/README.txt 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/Demos/callback/README.txt 1970-01-01 00:00:00.000000000 +0000
@@ -1,12 +0,0 @@
-This example demonstrates how you can wrap a C API
-that has a callback interface, so that you can
-pass Python functions to it as callbacks.
-
-The files cheesefinder.h and cheesefinder.c
-represent the C library to be wrapped.
-
-The file cheese.pyx is the Pyrex module
-which wraps it.
-
-The file run_cheese.py demonstrates how to
-call the wrapper.
diff -Nru cython-0.26.1/Demos/embed/Makefile cython-0.29.14/Demos/embed/Makefile
--- cython-0.26.1/Demos/embed/Makefile 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Demos/embed/Makefile 2018-09-22 14:18:56.000000000 +0000
@@ -1,6 +1,7 @@
# Makefile for creating our standalone Cython program
PYTHON := python
PYVERSION := $(shell $(PYTHON) -c "import sys; print(sys.version[:3])")
+PYPREFIX := $(shell $(PYTHON) -c "import sys; print(sys.prefix)")
INCDIR := $(shell $(PYTHON) -c "from distutils import sysconfig; print(sysconfig.get_python_inc())")
PLATINCDIR := $(shell $(PYTHON) -c "from distutils import sysconfig; print(sysconfig.get_python_inc(plat_specific=True))")
@@ -31,5 +32,5 @@
@rm -f *~ *.o *.so core core.* *.c embedded test.output
test: clean all
- LD_LIBRARY_PATH=$(LIBDIR1):$$LD_LIBRARY_PATH ./embedded > test.output
+ PYTHONHOME=$(PYPREFIX) LD_LIBRARY_PATH=$(LIBDIR1):$$LD_LIBRARY_PATH ./embedded > test.output
$(PYTHON) assert_equal.py embedded.output test.output
diff -Nru cython-0.26.1/Demos/embed/README cython-0.29.14/Demos/embed/README
--- cython-0.26.1/Demos/embed/README 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Demos/embed/README 1970-01-01 00:00:00.000000000 +0000
@@ -1,5 +0,0 @@
-This example demonstrates how Cython-generated code
-can be called directly from a main program written in C.
-
-The Windows makefiles were contributed by
-Duncan Booth .
diff -Nru cython-0.26.1/Demos/embed/README.rst cython-0.29.14/Demos/embed/README.rst
--- cython-0.26.1/Demos/embed/README.rst 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/Demos/embed/README.rst 2018-09-22 14:18:56.000000000 +0000
@@ -0,0 +1,5 @@
+This example demonstrates how Cython-generated code
+can be called directly from a main program written in C.
+
+The Windows makefiles were contributed by
+Duncan Booth: Duncan.Booth@SuttonCourtenay.org.uk.
diff -Nru cython-0.26.1/Demos/freeze/README.rst cython-0.29.14/Demos/freeze/README.rst
--- cython-0.26.1/Demos/freeze/README.rst 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/Demos/freeze/README.rst 2018-09-22 14:18:56.000000000 +0000
@@ -0,0 +1,111 @@
+NAME
+====
+
+**cython_freeze** - create a C file for embedding Cython modules
+
+
+SYNOPSIS
+========
+::
+
+ cython_freeze [-o outfile] [-p] module [...]
+
+
+DESCRIPTION
+===========
+
+**cython_freeze** generates a C source file to embed a Python interpreter
+with one or more Cython modules built in. This allows one to create a single
+executable from Cython code, without having to have separate shared objects
+for each Cython module. A major advantage of this approach is that it allows
+debugging with gprof(1), which does not work with shared objects.
+
+Unless ``-p`` is given, the first module's ``__name__`` is set to
+``"__main__"`` and is imported on startup; if ``-p`` is given, a normal Python
+interpreter is built, with the given modules built into the binary.
+
+Note that this method differs from ``cython --embed``. The ``--embed`` option
+modifies the resulting C source file to include a ``main()`` function, so it
+can only be used on a single Cython module. The advantage of ``--embed`` is
+simplicity. This module, on the other hand, can be used with multiple
+modules, but it requires another C source file to be created.
+
+
+OPTIONS
+=======
+::
+
+ -o FILE, --outfile=FILE write output to FILE instead of standard output
+ -p, --pymain do not automatically run the first module as __main__
+
+
+EXAMPLE
+=======
+
+In the ``Demos/freeze`` directory, there exist two Cython modules:
+
+* ``lcmath.pyx``: A module that interfaces with the -lm library.
+
+* ``combinatorics.pyx``: A module that implements n-choose-r using lcmath.
+
+Both modules have the Python idiom ``if __name__ == "__main__"``, which only
+executes if that module is the "main" module. If run as main, lcmath prints the
+factorial of the argument, while combinatorics prints n-choose-r.
+
+The provided Makefile creates an executable, *nCr*, using combinatorics as the
+"main" module. It basically performs the following (ignoring the compiler
+flags)::
+
+ $ cython_freeze combinatorics lcmath > nCr.c
+ $ cython combinatorics.pyx
+ $ cython lcmath.pyx
+ $ gcc -c nCr.c
+ $ gcc -c combinatorics.c
+ $ gcc -c lcmath.c
+ $ gcc nCr.o combinatorics.o lcmath.o -o nCr
+
+Because the combinatorics module was listed first, its ``__name__`` is set
+to ``"__main__"``, while lcmath's is set to ``"lcmath"``. The executable now
+contains a Python interpreter and both Cython modules. ::
+
+ $ ./nCr
+ USAGE: ./nCr n r
+ Prints n-choose-r.
+ $ ./nCr 15812351235 12
+ 5.10028093999e+113
+
+You may wish to build a normal Python interpreter, rather than having one
+module as "main". This may happen if you want to use your module from an
+interactive shell or from another script, yet you still want it statically
+linked so you can profile it with gprof. To do this, add the ``--pymain``
+flag to ``cython_freeze``. In the Makefile, the *python* executable is built
+like this. ::
+
+ $ cython_freeze --pymain combinatorics lcmath -o python.c
+ $ gcc -c python.c
+ $ gcc python.o combinatorics.o lcmath.o -o python
+
+Now ``python`` is a normal Python interpreter, but the lcmath and combinatorics
+modules will be built into the executable. ::
+
+ $ ./python
+ Python 2.6.2 (release26-maint, Apr 19 2009, 01:58:18)
+ [GCC 4.3.3] on linux2
+ Type "help", "copyright", "credits" or "license" for more information.
+ >>> import lcmath
+ >>> lcmath.factorial(155)
+ 4.7891429014634364e+273
+
+
+PREREQUISITES
+=============
+
+Cython 0.11.2 (or newer, assuming the API does not change)
+
+
+SEE ALSO
+========
+
+* `Python `_
+* `Cython `_
+* `freeze.py `_
diff -Nru cython-0.26.1/Demos/freeze/README.txt cython-0.29.14/Demos/freeze/README.txt
--- cython-0.26.1/Demos/freeze/README.txt 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/Demos/freeze/README.txt 1970-01-01 00:00:00.000000000 +0000
@@ -1,111 +0,0 @@
-NAME
-====
-
-cython_freeze - create a C file for embedding Cython modules
-
-
-SYNOPSIS
-========
-
-cython_freeze [-o outfile] [-p] module [...]
-
-
-DESCRIPTION
-===========
-
-**cython_freeze** generates a C source file to embed a Python interpreter
-with one or more Cython modules built in. This allows one to create a single
-executable from Cython code, without having to have separate shared objects
-for each Cython module. A major advantage of this approach is that it allows
-debuging with gprof(1), which does not work with shared objects.
-
-Unless ``-p`` is given, the first module's ``__name__`` is set to
-``"__main__"`` and is imported on startup; if ``-p`` is given, a normal Python
-interpreter is built, with the given modules built into the binary.
-
-Note that this method differs from ``cython --embed``. The ``--embed`` options
-modifies the resulting C source file to include a ``main()`` function, so it
-can only be used on a single Cython module. The advantage ``--embed`` is
-simplicity. This module, on the other hand, can be used with multiple
-modules, but it requires another C source file to be created.
-
-
-OPTIONS
-=======
-
--o FILE, --outfile=FILE write output to FILE instead of standard output
--p, --pymain do not automatically run the first module as __main__
-
-
-EXAMPLE
-=======
-
-In the Demos/freeze directory, there exist two Cython modules:
-
-lcmath.pyx
- A module that interfaces with the -lm library.
-
-combinatorics.pyx
- A module that implements n-choose-r using lcmath.
-
-Both modules have the Python idiom ``if __name__ == "__main__"``, which only
-execute if that module is the "main" module. If run as main, lcmath prints the
-factorial of the argument, while combinatorics prints n-choose-r.
-
-The provided Makefile creates an executable, *nCr*, using combinatorics as the
-"main" module. It basically performs the following (ignoring the compiler
-flags)::
-
- $ cython_freeze combinatorics lcmath > nCr.c
- $ cython combinatorics.pyx
- $ cython lcmath.pyx
- $ gcc -c nCr.c
- $ gcc -c combinatorics.c
- $ gcc -c lcmath.c
- $ gcc nCr.o combinatorics.o lcmath.o -o nCr
-
-Because the combinatorics module was listed first, its ``__name__`` is set
-to ``"__main__"``, while lcmath's is set to ``"lcmath"``. The executable now
-contains a Python interpreter and both Cython modules. ::
-
- $ ./nCr
- USAGE: ./nCr n r
- Prints n-choose-r.
- $ ./nCr 15812351235 12
- 5.10028093999e+113
-
-You may wish to build a normal Python interpreter, rather than having one
-module as "main". This may happen if you want to use your module from an
-interactive shell or from another script, yet you still want it statically
-linked so you can profile it with gprof. To do this, add the ``--pymain``
-flag to ``cython_freeze``. In the Makefile, the *python* executable is built
-like this. ::
-
- $ cython_freeze --pymain combinatorics lcmath -o python.c
- $ gcc -c python.c
- $ gcc python.o combinatorics.o lcmath.o -o python
-
-Now ``python`` is a normal Python interpreter, but the lcmath and combinatorics
-modules will be built into the executable. ::
-
- $ ./python
- Python 2.6.2 (release26-maint, Apr 19 2009, 01:58:18)
- [GCC 4.3.3] on linux2
- Type "help", "copyright", "credits" or "license" for more information.
- >>> import lcmath
- >>> lcmath.factorial(155)
- 4.7891429014634364e+273
-
-
-PREREQUISITES
-=============
-
-Cython 0.11.2 (or newer, assuming the API does not change)
-
-
-SEE ALSO
-========
-
-* `Python `_
-* `Cython `_
-* `freeze.py `_
diff -Nru cython-0.26.1/Demos/overflow_perf_run.py cython-0.29.14/Demos/overflow_perf_run.py
--- cython-0.26.1/Demos/overflow_perf_run.py 2015-09-10 16:25:36.000000000 +0000
+++ cython-0.29.14/Demos/overflow_perf_run.py 2018-11-24 09:20:06.000000000 +0000
@@ -16,7 +16,7 @@
print(func.__name__)
for type in ['int', 'unsigned int', 'long long', 'unsigned long long', 'object']:
if func == most_orthogonal:
- if type == 'object' or np == None:
+ if type == 'object' or np is None:
continue
type_map = {'int': 'int32', 'unsigned int': 'uint32', 'long long': 'int64', 'unsigned long long': 'uint64'}
shape = N, 3
diff -Nru cython-0.26.1/docs/conf.py cython-0.29.14/docs/conf.py
--- cython-0.26.1/docs/conf.py 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/docs/conf.py 2018-11-24 09:20:06.000000000 +0000
@@ -20,7 +20,7 @@
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
-#sys.path.insert(0, os.path.abspath('.'))
+sys.path.insert(0, os.path.abspath('..'))
sys.path.append(os.path.abspath('sphinxext'))
# Import support for ipython console session syntax highlighting (lives
@@ -43,7 +43,8 @@
'cython_highlighting',
'sphinx.ext.pngmath',
'sphinx.ext.todo',
- 'sphinx.ext.intersphinx'
+ 'sphinx.ext.intersphinx',
+ 'sphinx.ext.autodoc'
]
try: import rst2pdf
@@ -126,7 +127,7 @@
todo_include_todos = True
# intersphinx for standard :keyword:s (def, for, etc.)
-intersphinx_mapping = {'python': ('http://docs.python.org/3/', None)}
+intersphinx_mapping = {'python': ('https://docs.python.org/3/', None)}
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
diff -Nru cython-0.26.1/docs/CONTRIBUTING.rst cython-0.29.14/docs/CONTRIBUTING.rst
--- cython-0.26.1/docs/CONTRIBUTING.rst 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/CONTRIBUTING.rst 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,17 @@
+Welcome, and thank you for your interest in contributing!
+=========================================================
+
+If you are looking for a good way to contribute to the Cython project, please
+
+* have a look at the `Cython Hacker Guide `_,
+ especially the section on `getting started `_.
+* look through the `issues that need help `_.
+* look through the `issues that are a good entry point for beginners `_.
+* ask on the `core developers mailing list `_ for guidance.
+
+If you have code that you want to contribute, please make sure that it
+
+* includes tests in the `tests/` directory (see the `Hacker Guide on Testing `_)
+* comes in the form of a pull request
+
+We use `travis `_ and `appveyor `_ for cross-platform testing, including pull requests.
diff -Nru "/tmp/tmp0lrW9P/aTeTJbw7H9/cython-0.26.1/docs/examples/Cython Magics.ipynb" "/tmp/tmp0lrW9P/hdCxpT7ujz/cython-0.29.14/docs/examples/Cython Magics.ipynb"
--- "/tmp/tmp0lrW9P/aTeTJbw7H9/cython-0.26.1/docs/examples/Cython Magics.ipynb" 2015-06-22 12:53:11.000000000 +0000
+++ "/tmp/tmp0lrW9P/hdCxpT7ujz/cython-0.29.14/docs/examples/Cython Magics.ipynb" 2018-09-22 14:18:56.000000000 +0000
@@ -1,366 +1,366 @@
-{
- "metadata": {
- "name": "Cython Magics",
- "signature": "sha256:c357b93e9480d6347c6677862bf43750745cef4b30129c5bc53cb879a19d4074"
- },
- "nbformat": 3,
- "nbformat_minor": 0,
- "worksheets": [
- {
- "cells": [
- {
- "cell_type": "heading",
- "level": 1,
- "metadata": {},
- "source": [
- "Cython Magic Functions"
- ]
- },
- {
- "cell_type": "heading",
- "level": 2,
- "metadata": {},
- "source": [
- "Loading the extension"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Cython has an IPython extension that contains a number of magic functions for working with Cython code. This extension can be loaded using the `%load_ext` magic as follows:"
- ]
- },
- {
- "cell_type": "code",
- "collapsed": false,
- "input": [
- "%load_ext cython"
- ],
- "language": "python",
- "metadata": {},
- "outputs": [],
- "prompt_number": 1
- },
- {
- "cell_type": "heading",
- "level": 2,
- "metadata": {},
- "source": [
- "The %cython_inline magic"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The `%%cython_inline` magic uses `Cython.inline` to compile a Cython expression. This allows you to enter and run a function body with Cython code. Use a bare `return` statement to return values. "
- ]
- },
- {
- "cell_type": "code",
- "collapsed": false,
- "input": [
- "a = 10\n",
- "b = 20"
- ],
- "language": "python",
- "metadata": {},
- "outputs": [],
- "prompt_number": 2
- },
- {
- "cell_type": "code",
- "collapsed": false,
- "input": [
- "%%cython_inline\n",
- "return a+b"
- ],
- "language": "python",
- "metadata": {},
- "outputs": [
- {
- "metadata": {},
- "output_type": "pyout",
- "prompt_number": 3,
- "text": [
- "30"
- ]
- }
- ],
- "prompt_number": 3
- },
- {
- "cell_type": "heading",
- "level": 2,
- "metadata": {},
- "source": [
- "The %cython_pyximport magic"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The `%%cython_pyximport` magic allows you to enter arbitrary Cython code into a cell. That Cython code is written as a `.pyx` file in the current working directory and then imported using `pyximport`. You have the specify the name of the module that the Code will appear in. All symbols from the module are imported automatically by the magic function."
- ]
- },
- {
- "cell_type": "code",
- "collapsed": false,
- "input": [
- "%%cython_pyximport foo\n",
- "def f(x):\n",
- " return 4.0*x"
- ],
- "language": "python",
- "metadata": {},
- "outputs": [],
- "prompt_number": 4
- },
- {
- "cell_type": "code",
- "collapsed": false,
- "input": [
- "f(10)"
- ],
- "language": "python",
- "metadata": {},
- "outputs": [
- {
- "metadata": {},
- "output_type": "pyout",
- "prompt_number": 5,
- "text": [
- "40.0"
- ]
- }
- ],
- "prompt_number": 5
- },
- {
- "cell_type": "heading",
- "level": 2,
- "metadata": {},
- "source": [
- "The %cython magic"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Probably the most important magic is the `%cython` magic. This is similar to the `%%cython_pyximport` magic, but doesn't require you to specify a module name. Instead, the `%%cython` magic uses manages everything using temporary files in the `~/.cython/magic` directory. All of the symbols in the Cython module are imported automatically by the magic.\n",
- "\n",
- "Here is a simple example of a Black-Scholes options pricing algorithm written in Cython. Please note that this example might not compile on non-POSIX systems (e.g., Windows) because of a missing `erf` symbol."
- ]
- },
- {
- "cell_type": "code",
- "collapsed": false,
- "input": [
- "%%cython\n",
- "cimport cython\n",
- "from libc.math cimport exp, sqrt, pow, log, erf\n",
- "\n",
- "@cython.cdivision(True)\n",
- "cdef double std_norm_cdf_cy(double x) nogil:\n",
- " return 0.5*(1+erf(x/sqrt(2.0)))\n",
- "\n",
- "@cython.cdivision(True)\n",
- "def black_scholes_cy(double s, double k, double t, double v,\n",
- " double rf, double div, double cp):\n",
- " \"\"\"Price an option using the Black-Scholes model.\n",
- " \n",
- " s : initial stock price\n",
- " k : strike price\n",
- " t : expiration time\n",
- " v : volatility\n",
- " rf : risk-free rate\n",
- " div : dividend\n",
- " cp : +1/-1 for call/put\n",
- " \"\"\"\n",
- " cdef double d1, d2, optprice\n",
- " with nogil:\n",
- " d1 = (log(s/k)+(rf-div+0.5*pow(v,2))*t)/(v*sqrt(t))\n",
- " d2 = d1 - v*sqrt(t)\n",
- " optprice = cp*s*exp(-div*t)*std_norm_cdf_cy(cp*d1) - \\\n",
- " cp*k*exp(-rf*t)*std_norm_cdf_cy(cp*d2)\n",
- " return optprice"
- ],
- "language": "python",
- "metadata": {},
- "outputs": [],
- "prompt_number": 6
- },
- {
- "cell_type": "code",
- "collapsed": false,
- "input": [
- "black_scholes_cy(100.0, 100.0, 1.0, 0.3, 0.03, 0.0, -1)"
- ],
- "language": "python",
- "metadata": {},
- "outputs": [
- {
- "metadata": {},
- "output_type": "pyout",
- "prompt_number": 7,
- "text": [
- "10.327861752731728"
- ]
- }
- ],
- "prompt_number": 7
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "For comparison, the same code is implemented here in pure python."
- ]
- },
- {
- "cell_type": "code",
- "collapsed": false,
- "input": [
- "from math import exp, sqrt, pow, log, erf\n",
- "\n",
- "def std_norm_cdf_py(x):\n",
- " return 0.5*(1+erf(x/sqrt(2.0)))\n",
- "\n",
- "def black_scholes_py(s, k, t, v, rf, div, cp):\n",
- " \"\"\"Price an option using the Black-Scholes model.\n",
- " \n",
- " s : initial stock price\n",
- " k : strike price\n",
- " t : expiration time\n",
- " v : volatility\n",
- " rf : risk-free rate\n",
- " div : dividend\n",
- " cp : +1/-1 for call/put\n",
- " \"\"\"\n",
- " d1 = (log(s/k)+(rf-div+0.5*pow(v,2))*t)/(v*sqrt(t))\n",
- " d2 = d1 - v*sqrt(t)\n",
- " optprice = cp*s*exp(-div*t)*std_norm_cdf_py(cp*d1) - \\\n",
- " cp*k*exp(-rf*t)*std_norm_cdf_py(cp*d2)\n",
- " return optprice"
- ],
- "language": "python",
- "metadata": {},
- "outputs": [],
- "prompt_number": 8
- },
- {
- "cell_type": "code",
- "collapsed": false,
- "input": [
- "black_scholes_py(100.0, 100.0, 1.0, 0.3, 0.03, 0.0, -1)"
- ],
- "language": "python",
- "metadata": {},
- "outputs": [
- {
- "metadata": {},
- "output_type": "pyout",
- "prompt_number": 9,
- "text": [
- "10.327861752731728"
- ]
- }
- ],
- "prompt_number": 9
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Below we see the runtime of the two functions: the Cython version is nearly a factor of 10 faster."
- ]
- },
- {
- "cell_type": "code",
- "collapsed": false,
- "input": [
- "%timeit black_scholes_cy(100.0, 100.0, 1.0, 0.3, 0.03, 0.0, -1)"
- ],
- "language": "python",
- "metadata": {},
- "outputs": [
- {
- "output_type": "stream",
- "stream": "stdout",
- "text": [
- "1000000 loops, best of 3: 319 ns per loop\n"
- ]
- }
- ],
- "prompt_number": 10
- },
- {
- "cell_type": "code",
- "collapsed": false,
- "input": [
- "%timeit black_scholes_py(100.0, 100.0, 1.0, 0.3, 0.03, 0.0, -1)"
- ],
- "language": "python",
- "metadata": {},
- "outputs": [
- {
- "output_type": "stream",
- "stream": "stdout",
- "text": [
- "100000 loops, best of 3: 2.28 \u00b5s per loop\n"
- ]
- }
- ],
- "prompt_number": 11
- },
- {
- "cell_type": "heading",
- "level": 2,
- "metadata": {},
- "source": [
- "External libraries"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Cython allows you to specify additional libraries to be linked with your extension, you can do so with the `-l` flag (also spelled `--lib`). Note that this flag can be passed more than once to specify multiple libraries, such as `-lm -llib2 --lib lib3`. Here's a simple example of how to access the system math library:"
- ]
- },
- {
- "cell_type": "code",
- "collapsed": false,
- "input": [
- "%%cython -lm\n",
- "from libc.math cimport sin\n",
- "print 'sin(1)=', sin(1)"
- ],
- "language": "python",
- "metadata": {},
- "outputs": [
- {
- "output_type": "stream",
- "stream": "stdout",
- "text": [
- "sin(1)= 0.841470984808\n"
- ]
- }
- ],
- "prompt_number": 12
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "You can similarly use the `-I/--include` flag to add include directories to the search path, and `-c/--compile-args` to add extra flags that are passed to Cython via the `extra_compile_args` of the distutils `Extension` class. Please see [the Cython docs on C library usage](http://docs.cython.org/src/tutorial/clibraries.html) for more details on the use of these flags."
- ]
- }
- ],
- "metadata": {}
- }
- ]
-}
+{
+ "metadata": {
+ "name": "Cython Magics",
+ "signature": "sha256:c357b93e9480d6347c6677862bf43750745cef4b30129c5bc53cb879a19d4074"
+ },
+ "nbformat": 3,
+ "nbformat_minor": 0,
+ "worksheets": [
+ {
+ "cells": [
+ {
+ "cell_type": "heading",
+ "level": 1,
+ "metadata": {},
+ "source": [
+ "Cython Magic Functions"
+ ]
+ },
+ {
+ "cell_type": "heading",
+ "level": 2,
+ "metadata": {},
+ "source": [
+ "Loading the extension"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Cython has an IPython extension that contains a number of magic functions for working with Cython code. This extension can be loaded using the `%load_ext` magic as follows:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "%load_ext cython"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [],
+ "prompt_number": 1
+ },
+ {
+ "cell_type": "heading",
+ "level": 2,
+ "metadata": {},
+ "source": [
+ "The %cython_inline magic"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The `%%cython_inline` magic uses `Cython.inline` to compile a Cython expression. This allows you to enter and run a function body with Cython code. Use a bare `return` statement to return values. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "a = 10\n",
+ "b = 20"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [],
+ "prompt_number": 2
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "%%cython_inline\n",
+ "return a+b"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "metadata": {},
+ "output_type": "pyout",
+ "prompt_number": 3,
+ "text": [
+ "30"
+ ]
+ }
+ ],
+ "prompt_number": 3
+ },
+ {
+ "cell_type": "heading",
+ "level": 2,
+ "metadata": {},
+ "source": [
+ "The %cython_pyximport magic"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+      "The `%%cython_pyximport` magic allows you to enter arbitrary Cython code into a cell. That Cython code is written as a `.pyx` file in the current working directory and then imported using `pyximport`. You have to specify the name of the module that the code will appear in. All symbols from the module are imported automatically by the magic function."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "%%cython_pyximport foo\n",
+ "def f(x):\n",
+ " return 4.0*x"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [],
+ "prompt_number": 4
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "f(10)"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "metadata": {},
+ "output_type": "pyout",
+ "prompt_number": 5,
+ "text": [
+ "40.0"
+ ]
+ }
+ ],
+ "prompt_number": 5
+ },
+ {
+ "cell_type": "heading",
+ "level": 2,
+ "metadata": {},
+ "source": [
+ "The %cython magic"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+      "Probably the most important magic is the `%cython` magic. This is similar to the `%%cython_pyximport` magic, but doesn't require you to specify a module name. Instead, the `%%cython` magic manages everything using temporary files in the `~/.cython/magic` directory. All of the symbols in the Cython module are imported automatically by the magic.\n",
+ "\n",
+ "Here is a simple example of a Black-Scholes options pricing algorithm written in Cython. Please note that this example might not compile on non-POSIX systems (e.g., Windows) because of a missing `erf` symbol."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "%%cython\n",
+ "cimport cython\n",
+ "from libc.math cimport exp, sqrt, pow, log, erf\n",
+ "\n",
+ "@cython.cdivision(True)\n",
+ "cdef double std_norm_cdf_cy(double x) nogil:\n",
+ " return 0.5*(1+erf(x/sqrt(2.0)))\n",
+ "\n",
+ "@cython.cdivision(True)\n",
+ "def black_scholes_cy(double s, double k, double t, double v,\n",
+ " double rf, double div, double cp):\n",
+ " \"\"\"Price an option using the Black-Scholes model.\n",
+ " \n",
+ " s : initial stock price\n",
+ " k : strike price\n",
+ " t : expiration time\n",
+ " v : volatility\n",
+ " rf : risk-free rate\n",
+ " div : dividend\n",
+ " cp : +1/-1 for call/put\n",
+ " \"\"\"\n",
+ " cdef double d1, d2, optprice\n",
+ " with nogil:\n",
+ " d1 = (log(s/k)+(rf-div+0.5*pow(v,2))*t)/(v*sqrt(t))\n",
+ " d2 = d1 - v*sqrt(t)\n",
+ " optprice = cp*s*exp(-div*t)*std_norm_cdf_cy(cp*d1) - \\\n",
+ " cp*k*exp(-rf*t)*std_norm_cdf_cy(cp*d2)\n",
+ " return optprice"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [],
+ "prompt_number": 6
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "black_scholes_cy(100.0, 100.0, 1.0, 0.3, 0.03, 0.0, -1)"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "metadata": {},
+ "output_type": "pyout",
+ "prompt_number": 7,
+ "text": [
+ "10.327861752731728"
+ ]
+ }
+ ],
+ "prompt_number": 7
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For comparison, the same code is implemented here in pure python."
+      "For comparison, the same code is implemented here in pure Python."
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "from math import exp, sqrt, pow, log, erf\n",
+ "\n",
+ "def std_norm_cdf_py(x):\n",
+ " return 0.5*(1+erf(x/sqrt(2.0)))\n",
+ "\n",
+ "def black_scholes_py(s, k, t, v, rf, div, cp):\n",
+ " \"\"\"Price an option using the Black-Scholes model.\n",
+ " \n",
+ " s : initial stock price\n",
+ " k : strike price\n",
+ " t : expiration time\n",
+ " v : volatility\n",
+ " rf : risk-free rate\n",
+ " div : dividend\n",
+ " cp : +1/-1 for call/put\n",
+ " \"\"\"\n",
+ " d1 = (log(s/k)+(rf-div+0.5*pow(v,2))*t)/(v*sqrt(t))\n",
+ " d2 = d1 - v*sqrt(t)\n",
+ " optprice = cp*s*exp(-div*t)*std_norm_cdf_py(cp*d1) - \\\n",
+ " cp*k*exp(-rf*t)*std_norm_cdf_py(cp*d2)\n",
+ " return optprice"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [],
+ "prompt_number": 8
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "black_scholes_py(100.0, 100.0, 1.0, 0.3, 0.03, 0.0, -1)"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "metadata": {},
+ "output_type": "pyout",
+ "prompt_number": 9,
+ "text": [
+ "10.327861752731728"
+ ]
+ }
+ ],
+ "prompt_number": 9
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Below we see the runtime of the two functions: the Cython version is nearly a factor of 10 faster."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "%timeit black_scholes_cy(100.0, 100.0, 1.0, 0.3, 0.03, 0.0, -1)"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "output_type": "stream",
+ "stream": "stdout",
+ "text": [
+ "1000000 loops, best of 3: 319 ns per loop\n"
+ ]
+ }
+ ],
+ "prompt_number": 10
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "%timeit black_scholes_py(100.0, 100.0, 1.0, 0.3, 0.03, 0.0, -1)"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "output_type": "stream",
+ "stream": "stdout",
+ "text": [
+ "100000 loops, best of 3: 2.28 \u00b5s per loop\n"
+ ]
+ }
+ ],
+ "prompt_number": 11
+ },
+ {
+ "cell_type": "heading",
+ "level": 2,
+ "metadata": {},
+ "source": [
+ "External libraries"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+      "Cython allows you to specify additional libraries to be linked with your extension; you can do so with the `-l` flag (also spelled `--lib`). Note that this flag can be passed more than once to specify multiple libraries, such as `-lm -llib2 --lib lib3`. Here's a simple example of how to access the system math library:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "collapsed": false,
+ "input": [
+ "%%cython -lm\n",
+ "from libc.math cimport sin\n",
+ "print 'sin(1)=', sin(1)"
+ ],
+ "language": "python",
+ "metadata": {},
+ "outputs": [
+ {
+ "output_type": "stream",
+ "stream": "stdout",
+ "text": [
+ "sin(1)= 0.841470984808\n"
+ ]
+ }
+ ],
+ "prompt_number": 12
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "You can similarly use the `-I/--include` flag to add include directories to the search path, and `-c/--compile-args` to add extra flags that are passed to Cython via the `extra_compile_args` of the distutils `Extension` class. Please see [the Cython docs on C library usage](http://docs.cython.org/src/tutorial/clibraries.html) for more details on the use of these flags."
+ ]
+ }
+ ],
+ "metadata": {}
+ }
+ ]
+}
diff -Nru cython-0.26.1/docs/examples/not_in_docs/great_circle/c1.pyx cython-0.29.14/docs/examples/not_in_docs/great_circle/c1.pyx
--- cython-0.26.1/docs/examples/not_in_docs/great_circle/c1.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/not_in_docs/great_circle/c1.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,12 @@
+import math
+
+def great_circle(lon1, lat1, lon2, lat2):
+ radius = 3956 # miles
+ x = math.pi/180.0
+
+ a = (90.0 - lat1)*x
+ b = (90.0 - lat2)*x
+ theta = (lon2 - lon1)*x
+ c = math.acos(math.cos(a)*math.cos(b) + math.sin(a)*math.sin(b)*math.cos(theta))
+
+ return radius*c
diff -Nru cython-0.26.1/docs/examples/not_in_docs/great_circle/c2.pyx cython-0.29.14/docs/examples/not_in_docs/great_circle/c2.pyx
--- cython-0.26.1/docs/examples/not_in_docs/great_circle/c2.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/not_in_docs/great_circle/c2.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,13 @@
+import math
+
+def great_circle(double lon1, double lat1, double lon2, double lat2):
+ cdef double radius = 3956 # miles
+ cdef double x = math.pi/180.0
+ cdef double a, b, theta, c
+
+ a = (90.0 - lat1)*x
+ b = (90.0 - lat2)*x
+ theta = (lon2 - lon1)*x
+ c = math.acos(math.cos(a)*math.cos(b) + math.sin(a)*math.sin(b)*math.cos(theta))
+
+ return radius*c
diff -Nru cython-0.26.1/docs/examples/not_in_docs/great_circle/p1.py cython-0.29.14/docs/examples/not_in_docs/great_circle/p1.py
--- cython-0.26.1/docs/examples/not_in_docs/great_circle/p1.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/not_in_docs/great_circle/p1.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,12 @@
+import math
+
+def great_circle(lon1, lat1, lon2, lat2):
+ radius = 3956 # miles
+ x = math.pi/180.0
+
+ a = (90.0 - lat1)*x
+ b = (90.0 - lat2)*x
+ theta = (lon2 - lon1)*x
+ c = math.acos(math.cos(a)*math.cos(b) + math.sin(a)*math.sin(b)*math.cos(theta))
+
+ return radius*c
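
The ``c1.pyx``, ``c2.pyx``, and ``p1.py`` files above all implement the same spherical-law-of-cosines formula; the pure-Python ``p1.py`` version can be sanity-checked directly. A minimal exercise of it (the 3956-mile Earth radius is the value hard-coded in the example files):

```python
import math

def great_circle(lon1, lat1, lon2, lat2):
    # same formula as p1.py: spherical law of cosines on colatitudes
    radius = 3956  # miles, as in the example files
    x = math.pi / 180.0

    a = (90.0 - lat1) * x
    b = (90.0 - lat2) * x
    theta = (lon2 - lon1) * x
    c = math.acos(math.cos(a) * math.cos(b) +
                  math.sin(a) * math.sin(b) * math.cos(theta))

    return radius * c

# 90 degrees of longitude along the equator is a quarter circumference
print(great_circle(0.0, 0.0, 90.0, 0.0))  # ~6214 miles (3956 * pi / 2)
```

The typed ``c2.pyx`` variant computes the identical result; its speedup comes purely from declaring the intermediates as C ``double``\ s.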
diff -Nru cython-0.26.1/docs/examples/quickstart/build/hello.pyx cython-0.29.14/docs/examples/quickstart/build/hello.pyx
--- cython-0.26.1/docs/examples/quickstart/build/hello.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/quickstart/build/hello.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,2 @@
+def say_hello_to(name):
+ print("Hello %s!" % name)
diff -Nru cython-0.26.1/docs/examples/quickstart/build/setup.py cython-0.29.14/docs/examples/quickstart/build/setup.py
--- cython-0.26.1/docs/examples/quickstart/build/setup.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/quickstart/build/setup.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,5 @@
+from distutils.core import setup
+from Cython.Build import cythonize
+
+setup(name='Hello world app',
+ ext_modules=cythonize("hello.pyx"))
diff -Nru cython-0.26.1/docs/examples/quickstart/cythonize/cdef_keyword.pyx cython-0.29.14/docs/examples/quickstart/cythonize/cdef_keyword.pyx
--- cython-0.26.1/docs/examples/quickstart/cythonize/cdef_keyword.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/quickstart/cythonize/cdef_keyword.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,2 @@
+cdef double f(double x) except? -2:
+ return x ** 2 - x
diff -Nru cython-0.26.1/docs/examples/quickstart/cythonize/integrate_cy.pyx cython-0.29.14/docs/examples/quickstart/cythonize/integrate_cy.pyx
--- cython-0.26.1/docs/examples/quickstart/cythonize/integrate_cy.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/quickstart/cythonize/integrate_cy.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,12 @@
+def f(double x):
+ return x ** 2 - x
+
+
+def integrate_f(double a, double b, int N):
+ cdef int i
+ cdef double s, dx
+ s = 0
+ dx = (b - a) / N
+ for i in range(N):
+ s += f(a + i * dx)
+ return s * dx
diff -Nru cython-0.26.1/docs/examples/quickstart/cythonize/integrate.py cython-0.29.14/docs/examples/quickstart/cythonize/integrate.py
--- cython-0.26.1/docs/examples/quickstart/cythonize/integrate.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/quickstart/cythonize/integrate.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,10 @@
+def f(x):
+ return x ** 2 - x
+
+
+def integrate_f(a, b, N):
+ s = 0
+ dx = (b - a) / N
+ for i in range(N):
+ s += f(a + i * dx)
+ return s * dx
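
The ``integrate.py``/``integrate_cy.pyx`` pair above computes a left Riemann sum. For f(x) = x² − x the exact integral over [0, 1] is −1/6, so the pure-Python version can be checked against that value as N grows:

```python
def f(x):
    return x ** 2 - x

def integrate_f(a, b, N):
    # left Riemann sum, exactly as in integrate.py
    s = 0
    dx = (b - a) / N
    for i in range(N):
        s += f(a + i * dx)
    return s * dx

print(integrate_f(0.0, 1.0, 100000))  # ~ -0.16667, i.e. close to -1/6
```

The Cython version in ``integrate_cy.pyx`` is line-for-line the same; only the ``cdef``/argument type declarations differ.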
diff -Nru cython-0.26.1/docs/examples/README.rst cython-0.29.14/docs/examples/README.rst
--- cython-0.26.1/docs/examples/README.rst 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/README.rst 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,3 @@
+This example directory is organized like the ``Cython/docs/src/`` directory,
+with one directory per ``.rst`` file. All files in this directory are tested
+by :file:`runtests.py` in `compile` mode.
diff -Nru cython-0.26.1/docs/examples/tutorial/array/clone.pyx cython-0.29.14/docs/examples/tutorial/array/clone.pyx
--- cython-0.26.1/docs/examples/tutorial/array/clone.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/array/clone.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,8 @@
+from cpython cimport array
+import array
+
+cdef array.array int_array_template = array.array('i', [])
+cdef array.array newarray
+
+# create an array with 3 elements with same type as template
+newarray = array.clone(int_array_template, 3, zero=False)
diff -Nru cython-0.26.1/docs/examples/tutorial/array/overhead.pyx cython-0.29.14/docs/examples/tutorial/array/overhead.pyx
--- cython-0.26.1/docs/examples/tutorial/array/overhead.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/array/overhead.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,15 @@
+from cpython cimport array
+import array
+
+cdef array.array a = array.array('i', [1, 2, 3])
+cdef int[:] ca = a
+
+cdef int overhead(object a):
+ cdef int[:] ca = a
+ return ca[0]
+
+cdef int no_overhead(int[:] ca):
+ return ca[0]
+
+print(overhead(a)) # new memory view will be constructed, overhead
+print(no_overhead(ca)) # ca is already a memory view, so no overhead
diff -Nru cython-0.26.1/docs/examples/tutorial/array/resize.pyx cython-0.29.14/docs/examples/tutorial/array/resize.pyx
--- cython-0.26.1/docs/examples/tutorial/array/resize.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/array/resize.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,10 @@
+from cpython cimport array
+import array
+
+cdef array.array a = array.array('i', [1, 2, 3])
+cdef array.array b = array.array('i', [4, 5, 6])
+
+# extend a with b, resize as needed
+array.extend(a, b)
+# resize a, leaving just original three elements
+array.resize(a, len(a) - len(b))
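
The ``array.extend``/``array.resize`` pair in ``resize.pyx`` has a close pure-Python analogue: ``extend`` is an ordinary ``array.array`` method, and slice deletion stands in for the C-level ``array.resize`` helper (which only exists in Cython's ``cpython.array``). A sketch:

```python
from array import array

a = array('i', [1, 2, 3])
b = array('i', [4, 5, 6])

# extend a with b (plain Python method, same effect as array.extend above)
a.extend(b)

# shrink back to the original three elements; slice deletion stands in
# for the C-level array.resize used in the Cython example
del a[len(a) - len(b):]

print(a.tolist())  # [1, 2, 3]
```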
diff -Nru cython-0.26.1/docs/examples/tutorial/array/safe_usage.pyx cython-0.29.14/docs/examples/tutorial/array/safe_usage.pyx
--- cython-0.26.1/docs/examples/tutorial/array/safe_usage.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/array/safe_usage.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,6 @@
+from cpython cimport array
+import array
+cdef array.array a = array.array('i', [1, 2, 3])
+cdef int[:] ca = a
+
+print(ca[0])
diff -Nru cython-0.26.1/docs/examples/tutorial/array/unsafe_usage.pyx cython-0.29.14/docs/examples/tutorial/array/unsafe_usage.pyx
--- cython-0.26.1/docs/examples/tutorial/array/unsafe_usage.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/array/unsafe_usage.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,11 @@
+from cpython cimport array
+import array
+
+cdef array.array a = array.array('i', [1, 2, 3])
+
+# access underlying pointer:
+print(a.data.as_ints[0])
+
+from libc.string cimport memset
+
+memset(a.data.as_voidptr, 0, len(a) * sizeof(int))
diff -Nru cython-0.26.1/docs/examples/tutorial/cdef_classes/integrate.pyx cython-0.29.14/docs/examples/tutorial/cdef_classes/integrate.pyx
--- cython-0.26.1/docs/examples/tutorial/cdef_classes/integrate.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/cdef_classes/integrate.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,14 @@
+from sin_of_square cimport Function, SinOfSquareFunction
+
+def integrate(Function f, double a, double b, int N):
+ cdef int i
+ cdef double s, dx
+ if f is None:
+ raise ValueError("f cannot be None")
+ s = 0
+ dx = (b - a) / N
+ for i in range(N):
+ s += f.evaluate(a + i * dx)
+ return s * dx
+
+print(integrate(SinOfSquareFunction(), 0, 1, 10000))
diff -Nru cython-0.26.1/docs/examples/tutorial/cdef_classes/math_function_2.pyx cython-0.29.14/docs/examples/tutorial/cdef_classes/math_function_2.pyx
--- cython-0.26.1/docs/examples/tutorial/cdef_classes/math_function_2.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/cdef_classes/math_function_2.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,3 @@
+cdef class Function:
+ cpdef double evaluate(self, double x) except *:
+ return 0
diff -Nru cython-0.26.1/docs/examples/tutorial/cdef_classes/math_function.py cython-0.29.14/docs/examples/tutorial/cdef_classes/math_function.py
--- cython-0.26.1/docs/examples/tutorial/cdef_classes/math_function.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/cdef_classes/math_function.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,7 @@
+class MathFunction(object):
+ def __init__(self, name, operator):
+ self.name = name
+ self.operator = operator
+
+ def __call__(self, *operands):
+ return self.operator(*operands)
diff -Nru cython-0.26.1/docs/examples/tutorial/cdef_classes/nonecheck.pyx cython-0.29.14/docs/examples/tutorial/cdef_classes/nonecheck.pyx
--- cython-0.26.1/docs/examples/tutorial/cdef_classes/nonecheck.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/cdef_classes/nonecheck.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,19 @@
+# cython: nonecheck=True
+# ^^^ Turns on nonecheck globally
+
+import cython
+
+cdef class MyClass:
+ pass
+
+# Turn off nonecheck locally for the function
+@cython.nonecheck(False)
+def func():
+ cdef MyClass obj = None
+ try:
+ # Turn nonecheck on again for a block
+ with cython.nonecheck(True):
+ print(obj.myfunc()) # Raises exception
+ except AttributeError:
+ pass
+ print(obj.myfunc()) # Hope for a crash!
diff -Nru cython-0.26.1/docs/examples/tutorial/cdef_classes/sin_of_square.pxd cython-0.29.14/docs/examples/tutorial/cdef_classes/sin_of_square.pxd
--- cython-0.26.1/docs/examples/tutorial/cdef_classes/sin_of_square.pxd 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/cdef_classes/sin_of_square.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,5 @@
+cdef class Function:
+ cpdef double evaluate(self, double x) except *
+
+cdef class SinOfSquareFunction(Function):
+ cpdef double evaluate(self, double x) except *
diff -Nru cython-0.26.1/docs/examples/tutorial/cdef_classes/sin_of_square.pyx cython-0.29.14/docs/examples/tutorial/cdef_classes/sin_of_square.pyx
--- cython-0.26.1/docs/examples/tutorial/cdef_classes/sin_of_square.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/cdef_classes/sin_of_square.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,9 @@
+from libc.math cimport sin
+
+cdef class Function:
+ cpdef double evaluate(self, double x) except *:
+ return 0
+
+cdef class SinOfSquareFunction(Function):
+ cpdef double evaluate(self, double x) except *:
+ return sin(x ** 2)
diff -Nru cython-0.26.1/docs/examples/tutorial/cdef_classes/wave_function.pyx cython-0.29.14/docs/examples/tutorial/cdef_classes/wave_function.pyx
--- cython-0.26.1/docs/examples/tutorial/cdef_classes/wave_function.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/cdef_classes/wave_function.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,21 @@
+from sin_of_square cimport Function
+
+cdef class WaveFunction(Function):
+
+ # Not available in Python-space:
+ cdef double offset
+
+ # Available in Python-space:
+ cdef public double freq
+
+ # Available in Python-space, but only for reading:
+ cdef readonly double scale
+
+ # Available in Python-space:
+ @property
+ def period(self):
+ return 1.0 / self.freq
+
+ @period.setter
+ def period(self, value):
+ self.freq = 1.0 / value
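The `period` property in `wave_function.pyx` behaves exactly like an ordinary Python property; only the C-level attribute visibility (`cdef` / `public` / `readonly`) has no pure-Python analogue. A minimal sketch of the same getter/setter pair, added here for illustration:

```python
class WaveFunction:
    def __init__(self, freq=1.0):
        self.freq = freq

    @property
    def period(self):
        # period is derived from freq on every access
        return 1.0 / self.freq

    @period.setter
    def period(self, value):
        # assigning a period updates the underlying freq
        self.freq = 1.0 / value

w = WaveFunction(freq=4.0)
print(w.period)  # 0.25
w.period = 0.5
print(w.freq)    # 2.0
```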
diff -Nru cython-0.26.1/docs/examples/tutorial/clibraries/c-algorithms/src/queue.h cython-0.29.14/docs/examples/tutorial/clibraries/c-algorithms/src/queue.h
--- cython-0.26.1/docs/examples/tutorial/clibraries/c-algorithms/src/queue.h 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/clibraries/c-algorithms/src/queue.h 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,17 @@
+/* queue.h */
+
+typedef struct _Queue Queue;
+typedef void *QueueValue;
+
+Queue *queue_new(void);
+void queue_free(Queue *queue);
+
+int queue_push_head(Queue *queue, QueueValue data);
+QueueValue queue_pop_head(Queue *queue);
+QueueValue queue_peek_head(Queue *queue);
+
+int queue_push_tail(Queue *queue, QueueValue data);
+QueueValue queue_pop_tail(Queue *queue);
+QueueValue queue_peek_tail(Queue *queue);
+
+int queue_is_empty(Queue *queue);
diff -Nru cython-0.26.1/docs/examples/tutorial/clibraries/cqueue.pxd cython-0.29.14/docs/examples/tutorial/clibraries/cqueue.pxd
--- cython-0.26.1/docs/examples/tutorial/clibraries/cqueue.pxd 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/clibraries/cqueue.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,19 @@
+# cqueue.pxd
+
+cdef extern from "c-algorithms/src/queue.h":
+ ctypedef struct Queue:
+ pass
+ ctypedef void* QueueValue
+
+ Queue* queue_new()
+ void queue_free(Queue* queue)
+
+ int queue_push_head(Queue* queue, QueueValue data)
+ QueueValue queue_pop_head(Queue* queue)
+ QueueValue queue_peek_head(Queue* queue)
+
+ int queue_push_tail(Queue* queue, QueueValue data)
+ QueueValue queue_pop_tail(Queue* queue)
+ QueueValue queue_peek_tail(Queue* queue)
+
+ bint queue_is_empty(Queue* queue)
diff -Nru cython-0.26.1/docs/examples/tutorial/clibraries/queue2.pyx cython-0.29.14/docs/examples/tutorial/clibraries/queue2.pyx
--- cython-0.26.1/docs/examples/tutorial/clibraries/queue2.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/clibraries/queue2.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,11 @@
+# queue.pyx
+
+cimport cqueue
+
+cdef class Queue:
+ cdef cqueue.Queue* _c_queue
+
+ def __cinit__(self):
+ self._c_queue = cqueue.queue_new()
+ if self._c_queue is NULL:
+ raise MemoryError()
diff -Nru cython-0.26.1/docs/examples/tutorial/clibraries/queue3.pyx cython-0.29.14/docs/examples/tutorial/clibraries/queue3.pyx
--- cython-0.26.1/docs/examples/tutorial/clibraries/queue3.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/clibraries/queue3.pyx 2019-02-27 12:23:19.000000000 +0000
@@ -0,0 +1,61 @@
+# queue.pyx
+
+cimport cqueue
+
+cdef class Queue:
+ """A queue class for C integer values.
+
+ >>> q = Queue()
+ >>> q.append(5)
+ >>> q.peek()
+ 5
+ >>> q.pop()
+ 5
+ """
+ cdef cqueue.Queue* _c_queue
+ def __cinit__(self):
+ self._c_queue = cqueue.queue_new()
+ if self._c_queue is NULL:
+ raise MemoryError()
+
+ def __dealloc__(self):
+ if self._c_queue is not NULL:
+ cqueue.queue_free(self._c_queue)
+
+ cpdef append(self, int value):
+ if not cqueue.queue_push_tail(self._c_queue,
+ value):
+ raise MemoryError()
+
+ # The `cpdef` feature is obviously not available for the original "extend()"
+ # method, as the method signature is incompatible with Python argument
+ # types (Python does not have pointers). However, we can rename
+ # the C-ish "extend()" method to e.g. "extend_ints()", and write
+ # a new "extend()" method that provides a suitable Python interface by
+ # accepting an arbitrary Python iterable.
+ cpdef extend(self, values):
+ for value in values:
+ self.append(value)
+
+ cdef extend_ints(self, int* values, size_t count):
+ cdef int value
+ for value in values[:count]: # Slicing pointer to limit the iteration boundaries.
+ self.append(value)
+
+ cpdef int peek(self) except? -1:
+ cdef int value = cqueue.queue_peek_head(self._c_queue)
+
+ if value == 0:
+ # this may mean that the queue is empty,
+ # or that it happens to contain a 0 value
+ if cqueue.queue_is_empty(self._c_queue):
+ raise IndexError("Queue is empty")
+ return value
+
+ cpdef int pop(self) except? -1:
+ if cqueue.queue_is_empty(self._c_queue):
+ raise IndexError("Queue is empty")
+ return cqueue.queue_pop_head(self._c_queue)
+
+ def __bool__(self):
+ return not cqueue.queue_is_empty(self._c_queue)
diff -Nru cython-0.26.1/docs/examples/tutorial/clibraries/queue.pyx cython-0.29.14/docs/examples/tutorial/clibraries/queue.pyx
--- cython-0.26.1/docs/examples/tutorial/clibraries/queue.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/clibraries/queue.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,9 @@
+# queue.pyx
+
+cimport cqueue
+
+cdef class Queue:
+ cdef cqueue.Queue* _c_queue
+
+ def __cinit__(self):
+ self._c_queue = cqueue.queue_new()
diff -Nru cython-0.26.1/docs/examples/tutorial/clibraries/test_queue.py cython-0.29.14/docs/examples/tutorial/clibraries/test_queue.py
--- cython-0.26.1/docs/examples/tutorial/clibraries/test_queue.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/clibraries/test_queue.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,36 @@
+from __future__ import print_function
+
+import time
+
+import queue
+
+Q = queue.Queue()
+
+Q.append(10)
+Q.append(20)
+print(Q.peek())
+print(Q.pop())
+print(Q.pop())
+try:
+ print(Q.pop())
+except IndexError as e:
+ print("Error message:", e) # Prints "Queue is empty"
+
+i = 10000
+
+values = range(i)
+
+start_time = time.time()
+
+Q.extend(values)
+
+end_time = time.time() - start_time
+
+print("Adding {} items took {:1.3f} msecs.".format(i, 1000 * end_time))
+
+for i in range(41):
+ Q.pop()
+
+Q.pop()
+print("The answer is:")
+print(Q.pop())
diff -Nru cython-0.26.1/docs/examples/tutorial/cython_tutorial/fib.pyx cython-0.29.14/docs/examples/tutorial/cython_tutorial/fib.pyx
--- cython-0.26.1/docs/examples/tutorial/cython_tutorial/fib.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/cython_tutorial/fib.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,10 @@
+from __future__ import print_function
+
+def fib(n):
+ """Print the Fibonacci series up to n."""
+ a, b = 0, 1
+ while b < n:
+ print(b, end=' ')
+ a, b = b, a + b
+
+ print()
diff -Nru cython-0.26.1/docs/examples/tutorial/cython_tutorial/primes_cpp.pyx cython-0.29.14/docs/examples/tutorial/cython_tutorial/primes_cpp.pyx
--- cython-0.26.1/docs/examples/tutorial/cython_tutorial/primes_cpp.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/cython_tutorial/primes_cpp.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,21 @@
+# distutils: language=c++
+
+from libcpp.vector cimport vector
+
+def primes(unsigned int nb_primes):
+ cdef int n, i
+ cdef vector[int] p
+ p.reserve(nb_primes) # allocate memory for 'nb_primes' elements.
+
+ n = 2
+ while p.size() < nb_primes: # size() for vectors is similar to len()
+ for i in p:
+ if n % i == 0:
+ break
+ else:
+ p.push_back(n) # push_back is similar to append()
+ n += 1
+
+ # Vectors are automatically converted to Python
+ # lists when converted to Python objects.
+ return p
diff -Nru cython-0.26.1/docs/examples/tutorial/cython_tutorial/primes_python.py cython-0.29.14/docs/examples/tutorial/cython_tutorial/primes_python.py
--- cython-0.26.1/docs/examples/tutorial/cython_tutorial/primes_python.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/cython_tutorial/primes_python.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,14 @@
+def primes_python(nb_primes):
+ p = []
+ n = 2
+ while len(p) < nb_primes:
+ # Is n prime?
+ for i in p:
+ if n % i == 0:
+ break
+
+ # If no break occurred in the loop
+ else:
+ p.append(n)
+ n += 1
+ return p
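As a sanity check, the pure-Python version can be exercised directly. This is a self-contained copy of the function above with a small usage example:

```python
def primes_python(nb_primes):
    p = []
    n = 2
    while len(p) < nb_primes:
        # Is n prime?
        for i in p:
            if n % i == 0:
                break
        # If no break occurred in the loop
        else:
            p.append(n)
        n += 1
    return p

print(primes_python(5))  # [2, 3, 5, 7, 11]
```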
diff -Nru cython-0.26.1/docs/examples/tutorial/cython_tutorial/primes.pyx cython-0.29.14/docs/examples/tutorial/cython_tutorial/primes.pyx
--- cython-0.26.1/docs/examples/tutorial/cython_tutorial/primes.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/cython_tutorial/primes.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,23 @@
+def primes(int nb_primes):
+ cdef int n, i, len_p
+ cdef int p[1000]
+ if nb_primes > 1000:
+ nb_primes = 1000
+
+ len_p = 0 # The current number of elements in p.
+ n = 2
+ while len_p < nb_primes:
+ # Is n prime?
+ for i in p[:len_p]:
+ if n % i == 0:
+ break
+
+ # If no break occurred in the loop, we have a prime.
+ else:
+ p[len_p] = n
+ len_p += 1
+ n += 1
+
+    # Let's return the result in a Python list:
+ result_as_list = [prime for prime in p[:len_p]]
+ return result_as_list
diff -Nru cython-0.26.1/docs/examples/tutorial/cython_tutorial/setup.py cython-0.29.14/docs/examples/tutorial/cython_tutorial/setup.py
--- cython-0.26.1/docs/examples/tutorial/cython_tutorial/setup.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/cython_tutorial/setup.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,6 @@
+from distutils.core import setup
+from Cython.Build import cythonize
+
+setup(
+ ext_modules=cythonize("fib.pyx"),
+)
diff -Nru cython-0.26.1/docs/examples/tutorial/external/atoi.pyx cython-0.29.14/docs/examples/tutorial/external/atoi.pyx
--- cython-0.26.1/docs/examples/tutorial/external/atoi.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/external/atoi.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,5 @@
+from libc.stdlib cimport atoi
+
+cdef parse_charptr_to_py_int(char* s):
+ assert s is not NULL, "byte string value is NULL"
+ return atoi(s) # note: atoi() has no error detection!
diff -Nru cython-0.26.1/docs/examples/tutorial/external/cpdef_sin.pyx cython-0.29.14/docs/examples/tutorial/external/cpdef_sin.pyx
--- cython-0.26.1/docs/examples/tutorial/external/cpdef_sin.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/external/cpdef_sin.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,7 @@
+"""
+>>> sin(0)
+0.0
+"""
+
+cdef extern from "math.h":
+ cpdef double sin(double x)
diff -Nru cython-0.26.1/docs/examples/tutorial/external/keyword_args_call.pyx cython-0.29.14/docs/examples/tutorial/external/keyword_args_call.pyx
--- cython-0.26.1/docs/examples/tutorial/external/keyword_args_call.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/external/keyword_args_call.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,7 @@
+cdef extern from "string.h":
+ char* strstr(const char *haystack, const char *needle)
+
+cdef char* data = "hfvcakdfagbcffvschvxcdfgccbcfhvgcsnfxjh"
+
+cdef char* pos = strstr(needle='akd', haystack=data)
+print(pos is not NULL)
diff -Nru cython-0.26.1/docs/examples/tutorial/external/keyword_args.pyx cython-0.29.14/docs/examples/tutorial/external/keyword_args.pyx
--- cython-0.26.1/docs/examples/tutorial/external/keyword_args.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/external/keyword_args.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,2 @@
+cdef extern from "string.h":
+ char* strstr(const char *haystack, const char *needle)
diff -Nru cython-0.26.1/docs/examples/tutorial/external/libc_sin.pyx cython-0.29.14/docs/examples/tutorial/external/libc_sin.pyx
--- cython-0.26.1/docs/examples/tutorial/external/libc_sin.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/external/libc_sin.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,4 @@
+from libc.math cimport sin
+
+cdef double f(double x):
+ return sin(x * x)
diff -Nru cython-0.26.1/docs/examples/tutorial/external/py_version_hex.pyx cython-0.29.14/docs/examples/tutorial/external/py_version_hex.pyx
--- cython-0.26.1/docs/examples/tutorial/external/py_version_hex.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/external/py_version_hex.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,4 @@
+from cpython.version cimport PY_VERSION_HEX
+
+# Python version >= 3.2 final ?
+print(PY_VERSION_HEX >= 0x030200F0)
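`PY_VERSION_HEX` shares its layout with `sys.hexversion` (major, minor, micro, release level where `0xF` means "final", and serial), so the same version check can be sketched in plain Python:

```python
import sys

# 0x030200F0 decodes as major 3, minor 2, micro 0, release level F (final).
v = 0x030200F0
print((v >> 24) & 0xFF, (v >> 16) & 0xFF, (v >> 8) & 0xFF)  # 3 2 0

# Equivalent of the PY_VERSION_HEX comparison in the example above:
print(sys.hexversion >= v)  # True on any Python >= 3.2 final
```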
diff -Nru cython-0.26.1/docs/examples/tutorial/external/setup.py cython-0.29.14/docs/examples/tutorial/external/setup.py
--- cython-0.26.1/docs/examples/tutorial/external/setup.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/external/setup.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,13 @@
+from distutils.core import setup
+from distutils.extension import Extension
+from Cython.Build import cythonize
+
+ext_modules = [
+ Extension("demo",
+ sources=["demo.pyx"],
+ libraries=["m"] # Unix-like specific
+ )
+]
+
+setup(name="Demos",
+ ext_modules=cythonize(ext_modules))
diff -Nru cython-0.26.1/docs/examples/tutorial/fib1/fib.pyx cython-0.29.14/docs/examples/tutorial/fib1/fib.pyx
--- cython-0.26.1/docs/examples/tutorial/fib1/fib.pyx 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/fib1/fib.pyx 1970-01-01 00:00:00.000000000 +0000
@@ -1,6 +0,0 @@
-def fib(n):
- """Print the Fibonacci series up to n."""
- a, b = 0, 1
- while b < n:
- print b,
- a, b = b, a + b
diff -Nru cython-0.26.1/docs/examples/tutorial/fib1/setup.py cython-0.29.14/docs/examples/tutorial/fib1/setup.py
--- cython-0.26.1/docs/examples/tutorial/fib1/setup.py 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/fib1/setup.py 1970-01-01 00:00:00.000000000 +0000
@@ -1,6 +0,0 @@
-from distutils.core import setup
-from Cython.Build import cythonize
-
-setup(
- ext_modules=cythonize("fib.pyx"),
-)
diff -Nru cython-0.26.1/docs/examples/tutorial/great_circle/c1.pyx cython-0.29.14/docs/examples/tutorial/great_circle/c1.pyx
--- cython-0.26.1/docs/examples/tutorial/great_circle/c1.pyx 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/great_circle/c1.pyx 1970-01-01 00:00:00.000000000 +0000
@@ -1,12 +0,0 @@
-import math
-
-def great_circle(lon1, lat1, lon2, lat2):
- radius = 3956 # miles
- x = math.pi/180.0
-
- a = (90.0 - lat1)*x
- b = (90.0 - lat2)*x
- theta = (lon2 - lon1)*x
- c = math.acos(math.cos(a)*math.cos(b) + math.sin(a)*math.sin(b)*math.cos(theta))
-
- return radius*c
diff -Nru cython-0.26.1/docs/examples/tutorial/great_circle/c2.pyx cython-0.29.14/docs/examples/tutorial/great_circle/c2.pyx
--- cython-0.26.1/docs/examples/tutorial/great_circle/c2.pyx 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/great_circle/c2.pyx 1970-01-01 00:00:00.000000000 +0000
@@ -1,13 +0,0 @@
-import math
-
-def great_circle(double lon1, double lat1, double lon2, double lat2):
- cdef double radius = 3956 # miles
- cdef double x = math.pi/180.0
- cdef double a, b, theta, c
-
- a = (90.0 - lat1)*x
- b = (90.0 - lat2)*x
- theta = (lon2 - lon1)*x
- c = math.acos(math.cos(a)*math.cos(b) + math.sin(a)*math.sin(b)*math.cos(theta))
-
- return radius*c
diff -Nru cython-0.26.1/docs/examples/tutorial/great_circle/p1.py cython-0.29.14/docs/examples/tutorial/great_circle/p1.py
--- cython-0.26.1/docs/examples/tutorial/great_circle/p1.py 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/great_circle/p1.py 1970-01-01 00:00:00.000000000 +0000
@@ -1,12 +0,0 @@
-import math
-
-def great_circle(lon1, lat1, lon2, lat2):
- radius = 3956 # miles
- x = math.pi/180.0
-
- a = (90.0 - lat1)*x
- b = (90.0 - lat2)*x
- theta = (lon2 - lon1)*x
- c = math.acos(math.cos(a)*math.cos(b) + math.sin(a)*math.sin(b)*math.cos(theta))
-
- return radius*c
diff -Nru cython-0.26.1/docs/examples/tutorial/memory_allocation/malloc.pyx cython-0.29.14/docs/examples/tutorial/memory_allocation/malloc.pyx
--- cython-0.26.1/docs/examples/tutorial/memory_allocation/malloc.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/memory_allocation/malloc.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,23 @@
+import random
+from libc.stdlib cimport malloc, free
+
+def random_noise(int number=1):
+ cdef int i
+ # allocate number * sizeof(double) bytes of memory
+ cdef double *my_array = malloc(number * sizeof(double))
+ if not my_array:
+ raise MemoryError()
+
+ try:
+ ran = random.normalvariate
+ for i in range(number):
+ my_array[i] = ran(0, 1)
+
+ # ... let's just assume we do some more heavy C calculations here to make up
+ # for the work that it takes to pack the C double values into Python float
+ # objects below, right after throwing away the existing objects above.
+
+ return [x for x in my_array[:number]]
+ finally:
+ # return the previously allocated memory to the system
+ free(my_array)
diff -Nru cython-0.26.1/docs/examples/tutorial/memory_allocation/some_memory.pyx cython-0.29.14/docs/examples/tutorial/memory_allocation/some_memory.pyx
--- cython-0.26.1/docs/examples/tutorial/memory_allocation/some_memory.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/memory_allocation/some_memory.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,25 @@
+from cpython.mem cimport PyMem_Malloc, PyMem_Realloc, PyMem_Free
+
+cdef class SomeMemory:
+
+ cdef double* data
+
+ def __cinit__(self, size_t number):
+ # allocate some memory (uninitialised, may contain arbitrary data)
+ self.data = PyMem_Malloc(number * sizeof(double))
+ if not self.data:
+ raise MemoryError()
+
+ def resize(self, size_t new_number):
+ # Allocates new_number * sizeof(double) bytes,
+ # preserving the current content and making a best-effort to
+ # re-use the original data location.
+ mem = PyMem_Realloc(self.data, new_number * sizeof(double))
+ if not mem:
+ raise MemoryError()
+ # Only overwrite the pointer if the memory was really reallocated.
+        # On error (mem is NULL), the original memory has not been freed.
+ self.data = mem
+
+ def __dealloc__(self):
+ PyMem_Free(self.data) # no-op if self.data is NULL
diff -Nru cython-0.26.1/docs/examples/tutorial/numpy/convolve2.pyx cython-0.29.14/docs/examples/tutorial/numpy/convolve2.pyx
--- cython-0.26.1/docs/examples/tutorial/numpy/convolve2.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/numpy/convolve2.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,79 @@
+# tag: numpy_old
+# You can ignore the previous line.
+# It's for internal testing of the cython documentation.
+
+import numpy as np
+
+# "cimport" is used to import special compile-time information
+# about the numpy module (this is stored in a file numpy.pxd which is
+# currently part of the Cython distribution).
+cimport numpy as np
+
+# We now need to fix a datatype for our arrays. I've used the variable
+# DTYPE for this, which is assigned to the usual NumPy runtime
+# type info object.
+DTYPE = np.int
+
+# "ctypedef" assigns a corresponding compile-time type to DTYPE_t. For
+# every type in the numpy module there's a corresponding compile-time
+# type with a _t-suffix.
+ctypedef np.int_t DTYPE_t
+
+# "def" can type its arguments but not have a return type. The type of the
+# arguments for a "def" function is checked at run-time when entering the
+# function.
+#
+# The arrays f, g and h are typed as "np.ndarray" instances. The only effect
+# this has is to a) insert checks that the function arguments really are
+# NumPy arrays, and b) make some attribute access like f.shape[0] much
+# more efficient. (In this example this doesn't matter though.)
+def naive_convolve(np.ndarray f, np.ndarray g):
+ if g.shape[0] % 2 != 1 or g.shape[1] % 2 != 1:
+ raise ValueError("Only odd dimensions on filter supported")
+ assert f.dtype == DTYPE and g.dtype == DTYPE
+
+ # The "cdef" keyword is also used within functions to type variables. It
+ # can only be used at the top indentation level (there are non-trivial
+ # problems with allowing them in other places, though we'd love to see
+# well thought-out proposals for it).
+ #
+ # For the indices, the "int" type is used. This corresponds to a C int,
+ # other C types (like "unsigned int") could have been used instead.
+ # Purists could use "Py_ssize_t" which is the proper Python type for
+ # array indices.
+ cdef int vmax = f.shape[0]
+ cdef int wmax = f.shape[1]
+ cdef int smax = g.shape[0]
+ cdef int tmax = g.shape[1]
+ cdef int smid = smax // 2
+ cdef int tmid = tmax // 2
+ cdef int xmax = vmax + 2 * smid
+ cdef int ymax = wmax + 2 * tmid
+ cdef np.ndarray h = np.zeros([xmax, ymax], dtype=DTYPE)
+ cdef int x, y, s, t, v, w
+
+ # It is very important to type ALL your variables. You do not get any
+ # warnings if not, only much slower code (they are implicitly typed as
+ # Python objects).
+ cdef int s_from, s_to, t_from, t_to
+
+ # For the value variable, we want to use the same data type as is
+ # stored in the array, so we use "DTYPE_t" as defined above.
+ # NB! An important side-effect of this is that if "value" overflows its
+ # datatype size, it will simply wrap around like in C, rather than raise
+ # an error like in Python.
+ cdef DTYPE_t value
+ for x in range(xmax):
+ for y in range(ymax):
+ s_from = max(smid - x, -smid)
+ s_to = min((xmax - x) - smid, smid + 1)
+ t_from = max(tmid - y, -tmid)
+ t_to = min((ymax - y) - tmid, tmid + 1)
+ value = 0
+ for s in range(s_from, s_to):
+ for t in range(t_from, t_to):
+ v = x - smid + s
+ w = y - tmid + t
+ value += g[smid - s, tmid - t] * f[v, w]
+ h[x, y] = value
+ return h
diff -Nru cython-0.26.1/docs/examples/tutorial/numpy/convolve_py.py cython-0.29.14/docs/examples/tutorial/numpy/convolve_py.py
--- cython-0.26.1/docs/examples/tutorial/numpy/convolve_py.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/numpy/convolve_py.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,43 @@
+import numpy as np
+
+
+def naive_convolve(f, g):
+ # f is an image and is indexed by (v, w)
+ # g is a filter kernel and is indexed by (s, t),
+ # it needs odd dimensions
+ # h is the output image and is indexed by (x, y),
+ # it is not cropped
+ if g.shape[0] % 2 != 1 or g.shape[1] % 2 != 1:
+ raise ValueError("Only odd dimensions on filter supported")
+ # smid and tmid are number of pixels between the center pixel
+# and the edge, i.e. for a 5x5 filter they will be 2.
+ #
+ # The output size is calculated by adding smid, tmid to each
+ # side of the dimensions of the input image.
+ vmax = f.shape[0]
+ wmax = f.shape[1]
+ smax = g.shape[0]
+ tmax = g.shape[1]
+ smid = smax // 2
+ tmid = tmax // 2
+ xmax = vmax + 2 * smid
+ ymax = wmax + 2 * tmid
+ # Allocate result image.
+ h = np.zeros([xmax, ymax], dtype=f.dtype)
+ # Do convolution
+ for x in range(xmax):
+ for y in range(ymax):
+ # Calculate pixel value for h at (x,y). Sum one component
+ # for each pixel (s, t) of the filter g.
+ s_from = max(smid - x, -smid)
+ s_to = min((xmax - x) - smid, smid + 1)
+ t_from = max(tmid - y, -tmid)
+ t_to = min((ymax - y) - tmid, tmid + 1)
+ value = 0
+ for s in range(s_from, s_to):
+ for t in range(t_from, t_to):
+ v = x - smid + s
+ w = y - tmid + t
+ value += g[smid - s, tmid - t] * f[v, w]
+ h[x, y] = value
+ return h
diff -Nru cython-0.26.1/docs/examples/tutorial/primes/primes.py cython-0.29.14/docs/examples/tutorial/primes/primes.py
--- cython-0.26.1/docs/examples/tutorial/primes/primes.py 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/primes/primes.py 1970-01-01 00:00:00.000000000 +0000
@@ -1,19 +0,0 @@
-
-def primes(kmax):
- result = []
- if kmax > 1000:
- kmax = 1000
-
- p = [0] * 1000
- k = 0
- n = 2
- while k < kmax:
- i = 0
- while i < k and n % p[i] != 0:
- i += 1
- if i == k:
- p[k] = n
- k += 1
- result.append(n)
- n += 1
- return result
diff -Nru cython-0.26.1/docs/examples/tutorial/primes/primes.pyx cython-0.29.14/docs/examples/tutorial/primes/primes.pyx
--- cython-0.26.1/docs/examples/tutorial/primes/primes.pyx 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/primes/primes.pyx 1970-01-01 00:00:00.000000000 +0000
@@ -1,19 +0,0 @@
-def primes(int kmax):
- cdef int n, k, i
- cdef int p[1000]
- result = []
- if kmax > 1000:
- kmax = 1000
- k = 0
- n = 2
- while k < kmax:
- i = 0
- while i < k and n % p[i] != 0:
- i = i + 1
- if i == k:
- p[k] = n
- k = k + 1
- result.append(n)
- n = n + 1
- return result
-
diff -Nru cython-0.26.1/docs/examples/tutorial/primes/setup.py cython-0.29.14/docs/examples/tutorial/primes/setup.py
--- cython-0.26.1/docs/examples/tutorial/primes/setup.py 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/primes/setup.py 1970-01-01 00:00:00.000000000 +0000
@@ -1,6 +0,0 @@
-from distutils.core import setup
-from Cython.Build import cythonize
-
-setup(
- ext_modules=cythonize("primes.pyx"),
-)
diff -Nru cython-0.26.1/docs/examples/tutorial/profiling_tutorial/calc_pi_2.pyx cython-0.29.14/docs/examples/tutorial/profiling_tutorial/calc_pi_2.pyx
--- cython-0.26.1/docs/examples/tutorial/profiling_tutorial/calc_pi_2.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/profiling_tutorial/calc_pi_2.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,13 @@
+# cython: profile=True
+
+# calc_pi.pyx
+
+def recip_square(int i):
+ return 1. / i ** 2
+
+def approx_pi(int n=10000000):
+ cdef double val = 0.
+ cdef int k
+ for k in range(1, n + 1):
+ val += recip_square(k)
+ return (6 * val) ** .5
diff -Nru cython-0.26.1/docs/examples/tutorial/profiling_tutorial/calc_pi_3.pyx cython-0.29.14/docs/examples/tutorial/profiling_tutorial/calc_pi_3.pyx
--- cython-0.26.1/docs/examples/tutorial/profiling_tutorial/calc_pi_3.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/profiling_tutorial/calc_pi_3.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,13 @@
+# cython: profile=True
+
+# calc_pi.pyx
+
+cdef inline double recip_square(int i):
+ return 1. / (i * i)
+
+def approx_pi(int n=10000000):
+ cdef double val = 0.
+ cdef int k
+ for k in range(1, n + 1):
+ val += recip_square(k)
+ return (6 * val) ** .5
diff -Nru cython-0.26.1/docs/examples/tutorial/profiling_tutorial/calc_pi_4.pyx cython-0.29.14/docs/examples/tutorial/profiling_tutorial/calc_pi_4.pyx
--- cython-0.26.1/docs/examples/tutorial/profiling_tutorial/calc_pi_4.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/profiling_tutorial/calc_pi_4.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,16 @@
+# cython: profile=True
+
+# calc_pi.pyx
+
+cimport cython
+
+@cython.profile(False)
+cdef inline double recip_square(int i):
+ return 1. / (i * i)
+
+def approx_pi(int n=10000000):
+ cdef double val = 0.
+ cdef int k
+ for k in range(1, n + 1):
+ val += recip_square(k)
+ return (6 * val) ** .5
diff -Nru cython-0.26.1/docs/examples/tutorial/profiling_tutorial/calc_pi.py cython-0.29.14/docs/examples/tutorial/profiling_tutorial/calc_pi.py
--- cython-0.26.1/docs/examples/tutorial/profiling_tutorial/calc_pi.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/profiling_tutorial/calc_pi.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,10 @@
+# calc_pi.py
+
+def recip_square(i):
+ return 1. / i ** 2
+
+def approx_pi(n=10000000):
+ val = 0.
+ for k in range(1, n + 1):
+ val += recip_square(k)
+ return (6 * val) ** .5
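The baseline can be verified numerically: the inner sum converges to pi**2/6 (the Basel problem), so `approx_pi()` approaches `math.pi` from below as `n` grows. A self-contained sketch with a smaller `n` than the default:

```python
import math

def recip_square(i):
    return 1. / i ** 2

def approx_pi(n=10000000):
    val = 0.
    for k in range(1, n + 1):
        val += recip_square(k)
    return (6 * val) ** .5

# The truncation error is roughly 3 / (pi * n), so n = 100000
# already gives about five correct digits.
print(abs(approx_pi(100000) - math.pi) < 1e-4)  # True
```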
diff -Nru cython-0.26.1/docs/examples/tutorial/profiling_tutorial/often_called.pyx cython-0.29.14/docs/examples/tutorial/profiling_tutorial/often_called.pyx
--- cython-0.26.1/docs/examples/tutorial/profiling_tutorial/often_called.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/profiling_tutorial/often_called.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,5 @@
+cimport cython
+
+@cython.profile(False)
+def my_often_called_function():
+ pass
diff -Nru cython-0.26.1/docs/examples/tutorial/profiling_tutorial/profile_2.py cython-0.29.14/docs/examples/tutorial/profiling_tutorial/profile_2.py
--- cython-0.26.1/docs/examples/tutorial/profiling_tutorial/profile_2.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/profiling_tutorial/profile_2.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,13 @@
+# profile.py
+
+import pstats, cProfile
+
+import pyximport
+pyximport.install()
+
+import calc_pi
+
+cProfile.runctx("calc_pi.approx_pi()", globals(), locals(), "Profile.prof")
+
+s = pstats.Stats("Profile.prof")
+s.strip_dirs().sort_stats("time").print_stats()
diff -Nru cython-0.26.1/docs/examples/tutorial/profiling_tutorial/profile.py cython-0.29.14/docs/examples/tutorial/profiling_tutorial/profile.py
--- cython-0.26.1/docs/examples/tutorial/profiling_tutorial/profile.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/profiling_tutorial/profile.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,10 @@
+# profile.py
+
+import pstats, cProfile
+
+import calc_pi
+
+cProfile.runctx("calc_pi.approx_pi()", globals(), locals(), "Profile.prof")
+
+s = pstats.Stats("Profile.prof")
+s.strip_dirs().sort_stats("time").print_stats()
diff -Nru cython-0.26.1/docs/examples/tutorial/pure/A_equivalent.pyx cython-0.29.14/docs/examples/tutorial/pure/A_equivalent.pyx
--- cython-0.26.1/docs/examples/tutorial/pure/A_equivalent.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/pure/A_equivalent.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,15 @@
+cpdef int myfunction(int x, int y=2):
+ a = x - y
+ return a + x * y
+
+cdef double _helper(double a):
+ return a + 1
+
+cdef class A:
+ cdef public int a, b
+ def __init__(self, b=0):
+ self.a = 3
+ self.b = b
+
+ cpdef foo(self, double x):
+ print(x + _helper(1.0))
diff -Nru cython-0.26.1/docs/examples/tutorial/pure/annotations.py cython-0.29.14/docs/examples/tutorial/pure/annotations.py
--- cython-0.26.1/docs/examples/tutorial/pure/annotations.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/pure/annotations.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,5 @@
+import cython
+
+def func(foo: dict, bar: cython.int) -> tuple:
+ foo["hello world"] = 3 + bar
+ return foo, 5
diff -Nru cython-0.26.1/docs/examples/tutorial/pure/A.pxd cython-0.29.14/docs/examples/tutorial/pure/A.pxd
--- cython-0.26.1/docs/examples/tutorial/pure/A.pxd 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/pure/A.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,6 @@
+cpdef int myfunction(int x, int y=*)
+cdef double _helper(double a)
+
+cdef class A:
+ cdef public int a, b
+ cpdef foo(self, double x)
diff -Nru cython-0.26.1/docs/examples/tutorial/pure/A.py cython-0.29.14/docs/examples/tutorial/pure/A.py
--- cython-0.26.1/docs/examples/tutorial/pure/A.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/pure/A.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,14 @@
+def myfunction(x, y=2):
+ a = x - y
+ return a + x * y
+
+def _helper(a):
+ return a + 1
+
+class A:
+ def __init__(self, b=0):
+ self.a = 3
+ self.b = b
+
+ def foo(self, x):
+ print(x + _helper(1.0))
diff -Nru cython-0.26.1/docs/examples/tutorial/pure/c_arrays.py cython-0.29.14/docs/examples/tutorial/pure/c_arrays.py
--- cython-0.26.1/docs/examples/tutorial/pure/c_arrays.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/pure/c_arrays.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,15 @@
+import cython
+
+
+@cython.locals(counts=cython.int[10], digit=cython.int)
+def count_digits(digits):
+ """
+ >>> digits = '01112222333334445667788899'
+ >>> count_digits(map(int, digits))
+ [1, 3, 4, 5, 3, 1, 2, 2, 3, 2]
+ """
+ counts = [0] * 10
+ for digit in digits:
+ assert 0 <= digit <= 9
+ counts[digit] += 1
+ return counts
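Because `@cython.locals` is only a typing hint, the `c_arrays.py` example above runs unchanged as plain Python; a sketch without the decorator, exercising the doctest from the file:

```python
def count_digits(digits):
    """Count occurrences of each decimal digit in an iterable of ints."""
    counts = [0] * 10
    for digit in digits:
        assert 0 <= digit <= 9
        counts[digit] += 1
    return counts

digits = '01112222333334445667788899'
result = count_digits(map(int, digits))
print(result)  # [1, 3, 4, 5, 3, 1, 2, 2, 3, 2]
```

When compiled, the `cython.int[10]` local turns `counts` into a C array, removing the Python list overhead in the loop.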
diff -Nru cython-0.26.1/docs/examples/tutorial/pure/cclass.py cython-0.29.14/docs/examples/tutorial/pure/cclass.py
--- cython-0.26.1/docs/examples/tutorial/pure/cclass.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/pure/cclass.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,16 @@
+import cython
+
+
+@cython.cclass
+class A:
+ cython.declare(a=cython.int, b=cython.int)
+ c = cython.declare(cython.int, visibility='public')
+ d = cython.declare(cython.int) # private by default.
+ e = cython.declare(cython.int, visibility='readonly')
+
+ def __init__(self, a, b, c, d=5, e=3):
+ self.a = a
+ self.b = b
+ self.c = c
+ self.d = d
+ self.e = e
diff -Nru cython-0.26.1/docs/examples/tutorial/pure/compiled_switch.py cython-0.29.14/docs/examples/tutorial/pure/compiled_switch.py
--- cython-0.26.1/docs/examples/tutorial/pure/compiled_switch.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/pure/compiled_switch.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,6 @@
+import cython
+
+if cython.compiled:
+ print("Yep, I'm compiled.")
+else:
+ print("Just a lowly interpreted script.")
diff -Nru cython-0.26.1/docs/examples/tutorial/pure/cython_declare2.py cython-0.29.14/docs/examples/tutorial/pure/cython_declare2.py
--- cython-0.26.1/docs/examples/tutorial/pure/cython_declare2.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/pure/cython_declare2.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,3 @@
+import cython
+
+cython.declare(x=cython.int, y=cython.double) # cdef int x; cdef double y
diff -Nru cython-0.26.1/docs/examples/tutorial/pure/cython_declare.py cython-0.29.14/docs/examples/tutorial/pure/cython_declare.py
--- cython-0.26.1/docs/examples/tutorial/pure/cython_declare.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/pure/cython_declare.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,4 @@
+import cython
+
+x = cython.declare(cython.int) # cdef int x
+y = cython.declare(cython.double, 0.57721) # cdef double y = 0.57721
diff -Nru cython-0.26.1/docs/examples/tutorial/pure/dostuff.pxd cython-0.29.14/docs/examples/tutorial/pure/dostuff.pxd
--- cython-0.26.1/docs/examples/tutorial/pure/dostuff.pxd 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/pure/dostuff.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,4 @@
+import cython
+
+@cython.locals(t=cython.int, i=cython.int)
+cpdef int dostuff(int n)
diff -Nru cython-0.26.1/docs/examples/tutorial/pure/dostuff.py cython-0.29.14/docs/examples/tutorial/pure/dostuff.py
--- cython-0.26.1/docs/examples/tutorial/pure/dostuff.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/pure/dostuff.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,5 @@
+def dostuff(n):
+ t = 0
+ for i in range(n):
+ t += i
+ return t
diff -Nru cython-0.26.1/docs/examples/tutorial/pure/exceptval.py cython-0.29.14/docs/examples/tutorial/pure/exceptval.py
--- cython-0.26.1/docs/examples/tutorial/pure/exceptval.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/pure/exceptval.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,7 @@
+import cython
+
+@cython.exceptval(-1)
+def func(x: cython.int) -> cython.int:
+ if x < 0:
+ raise ValueError("need integer >= 0")
+ return x + 1
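In compiled code, `@cython.exceptval(-1)` makes the C-level function return -1 and set the Python error indicator when an exception is raised; interpreted, the decorator is inert and the exception simply propagates. A plain-Python stand-in showing the observable behavior:

```python
def func(x):
    # Plain-Python stand-in for the decorated Cython function.
    # In compiled code, raising here makes the underlying C function
    # return -1 as a sentinel; Python callers still see ValueError.
    if x < 0:
        raise ValueError("need integer >= 0")
    return x + 1

ok = func(4)            # 5
try:
    func(-2)
    caught = False
except ValueError:
    caught = True
```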
diff -Nru cython-0.26.1/docs/examples/tutorial/pure/locals.py cython-0.29.14/docs/examples/tutorial/pure/locals.py
--- cython-0.26.1/docs/examples/tutorial/pure/locals.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/pure/locals.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,6 @@
+import cython
+
+@cython.locals(a=cython.long, b=cython.long, n=cython.longlong)
+def foo(a, b, x, y):
+ n = a * b
+ # ...
diff -Nru cython-0.26.1/docs/examples/tutorial/pure/mymodule.pxd cython-0.29.14/docs/examples/tutorial/pure/mymodule.pxd
--- cython-0.26.1/docs/examples/tutorial/pure/mymodule.pxd 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/pure/mymodule.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,5 @@
+# mymodule.pxd
+
+# declare a C function as "cpdef" to export it to the module
+cdef extern from "math.h":
+ cpdef double sin(double x)
diff -Nru cython-0.26.1/docs/examples/tutorial/pure/mymodule.py cython-0.29.14/docs/examples/tutorial/pure/mymodule.py
--- cython-0.26.1/docs/examples/tutorial/pure/mymodule.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/pure/mymodule.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,10 @@
+# mymodule.py
+
+import cython
+
+# override with Python import if not in compiled code
+if not cython.compiled:
+ from math import sin
+
+# calls sin() from math.h when compiled with Cython and math.sin() in Python
+print(sin(0))
diff -Nru cython-0.26.1/docs/examples/tutorial/pure/pep_526.py cython-0.29.14/docs/examples/tutorial/pure/pep_526.py
--- cython-0.26.1/docs/examples/tutorial/pure/pep_526.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/pure/pep_526.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,22 @@
+import cython
+
+def func():
+ # Cython types are evaluated as for cdef declarations
+ x: cython.int # cdef int x
+ y: cython.double = 0.57721 # cdef double y = 0.57721
+ z: cython.float = 0.57721 # cdef float z = 0.57721
+
+ # Python types shadow Cython types for compatibility reasons
+ a: float = 0.54321 # cdef double a = 0.54321
+ b: int = 5 # cdef object b = 5
+ c: long = 6 # cdef object c = 6
+ pass
+
+@cython.cclass
+class A:
+ a: cython.int
+ b: cython.int
+
+ def __init__(self, b=0):
+ self.a = 3
+ self.b = b
diff -Nru cython-0.26.1/docs/examples/tutorial/string/api_func.pyx cython-0.29.14/docs/examples/tutorial/string/api_func.pyx
--- cython-0.26.1/docs/examples/tutorial/string/api_func.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/api_func.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,5 @@
+from to_unicode cimport _text
+
+def api_func(s):
+ text_input = _text(s)
+ # ...
diff -Nru cython-0.26.1/docs/examples/tutorial/string/arg_memview.pyx cython-0.29.14/docs/examples/tutorial/string/arg_memview.pyx
--- cython-0.26.1/docs/examples/tutorial/string/arg_memview.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/arg_memview.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,5 @@
+def process_byte_data(unsigned char[:] data):
+ length = data.shape[0]
+ first_byte = data[0]
+ slice_view = data[1:-1]
+ # ...
diff -Nru cython-0.26.1/docs/examples/tutorial/string/auto_conversion_1.pyx cython-0.29.14/docs/examples/tutorial/string/auto_conversion_1.pyx
--- cython-0.26.1/docs/examples/tutorial/string/auto_conversion_1.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/auto_conversion_1.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,9 @@
+# cython: c_string_type=unicode, c_string_encoding=utf8
+
+cdef char* c_string = 'abcdefg'
+
+# implicit decoding:
+cdef object py_unicode_object = c_string
+
+# explicit conversion to Python bytes:
+py_bytes_object = <bytes> c_string
diff -Nru cython-0.26.1/docs/examples/tutorial/string/auto_conversion_2.pyx cython-0.29.14/docs/examples/tutorial/string/auto_conversion_2.pyx
--- cython-0.26.1/docs/examples/tutorial/string/auto_conversion_2.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/auto_conversion_2.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,12 @@
+# cython: c_string_type=str, c_string_encoding=ascii
+
+cdef char* c_string = 'abcdefg'
+
+# implicit decoding in Py3, bytes conversion in Py2:
+cdef object py_str_object = c_string
+
+# explicit conversion to Python bytes:
+py_bytes_object = <bytes> c_string
+
+# explicit conversion to Python unicode:
+py_unicode_object = <unicode> c_string
diff -Nru cython-0.26.1/docs/examples/tutorial/string/auto_conversion_3.pyx cython-0.29.14/docs/examples/tutorial/string/auto_conversion_3.pyx
--- cython-0.26.1/docs/examples/tutorial/string/auto_conversion_3.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/auto_conversion_3.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,6 @@
+# cython: c_string_type=unicode, c_string_encoding=ascii
+
+def func():
+ ustring = u'abc'
+ cdef char* s = ustring
+ return s[0] # returns u'a'
diff -Nru cython-0.26.1/docs/examples/tutorial/string/c_func.pxd cython-0.29.14/docs/examples/tutorial/string/c_func.pxd
--- cython-0.26.1/docs/examples/tutorial/string/c_func.pxd 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/c_func.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,2 @@
+cdef char* c_call_returning_a_c_string()
+cdef void get_a_c_string(char** c_string, Py_ssize_t *length)
diff -Nru cython-0.26.1/docs/examples/tutorial/string/c_func.pyx cython-0.29.14/docs/examples/tutorial/string/c_func.pyx
--- cython-0.26.1/docs/examples/tutorial/string/c_func.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/c_func.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,22 @@
+from libc.stdlib cimport malloc
+from libc.string cimport strcpy, strlen
+
+cdef char* hello_world = 'hello world'
+cdef Py_ssize_t n = strlen(hello_world)
+
+
+cdef char* c_call_returning_a_c_string():
+ cdef char* c_string = <char *> malloc((n + 1) * sizeof(char))
+ if not c_string:
+ raise MemoryError()
+ strcpy(c_string, hello_world)
+ return c_string
+
+
+cdef void get_a_c_string(char** c_string_ptr, Py_ssize_t *length):
+ c_string_ptr[0] = <char *> malloc((n + 1) * sizeof(char))
+ if not c_string_ptr[0]:
+ raise MemoryError()
+
+ strcpy(c_string_ptr[0], hello_world)
+ length[0] = n
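The `char**` and `Py_ssize_t*` out-parameters in `get_a_c_string()` have no direct Python analogue; the idiomatic Python counterpart returns a tuple instead. An illustrative sketch (names are hypothetical):

```python
HELLO = b'hello world'

def get_a_string():
    # Python counterpart of get_a_c_string(): instead of filling
    # pointer out-parameters, return (buffer, length) as a tuple.
    buf = bytes(HELLO)
    return buf, len(buf)

buf, length = get_a_string()  # length == 11
```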
diff -Nru cython-0.26.1/docs/examples/tutorial/string/const.pyx cython-0.29.14/docs/examples/tutorial/string/const.pyx
--- cython-0.26.1/docs/examples/tutorial/string/const.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/const.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,4 @@
+cdef extern from "someheader.h":
+ ctypedef const char specialChar
+ int process_string(const char* s)
+ const unsigned char* look_up_cached_string(const unsigned char* key)
diff -Nru cython-0.26.1/docs/examples/tutorial/string/cpp_string.pyx cython-0.29.14/docs/examples/tutorial/string/cpp_string.pyx
--- cython-0.26.1/docs/examples/tutorial/string/cpp_string.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/cpp_string.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,12 @@
+# distutils: language = c++
+
+from libcpp.string cimport string
+
+def get_bytes():
+ py_bytes_object = b'hello world'
+ cdef string s = py_bytes_object
+
+ s.append('abc')
+ py_bytes_object = s
+ return py_bytes_object
+
diff -Nru cython-0.26.1/docs/examples/tutorial/string/decode_cpp_string.pyx cython-0.29.14/docs/examples/tutorial/string/decode_cpp_string.pyx
--- cython-0.26.1/docs/examples/tutorial/string/decode_cpp_string.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/decode_cpp_string.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,10 @@
+# distutils: language = c++
+
+from libcpp.string cimport string
+
+def get_ustrings():
+ cdef string s = string(b'abcdefg')
+
+ ustring1 = s.decode('UTF-8')
+ ustring2 = s[2:-2].decode('UTF-8')
+ return ustring1, ustring2
diff -Nru cython-0.26.1/docs/examples/tutorial/string/decode.pyx cython-0.29.14/docs/examples/tutorial/string/decode.pyx
--- cython-0.26.1/docs/examples/tutorial/string/decode.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/decode.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,9 @@
+from c_func cimport get_a_c_string
+
+cdef char* c_string = NULL
+cdef Py_ssize_t length = 0
+
+# get pointer and length from a C function
+get_a_c_string(&c_string, &length)
+
+ustring = c_string[:length].decode('UTF-8')
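The slice-then-decode pattern above matters because a known length lets the conversion handle embedded NUL bytes, which a plain NUL-terminated `char*` conversion would truncate. A pure-Python sketch with `bytes`:

```python
# A buffer with an embedded NUL byte and a known length.
raw = b'abc\x00def'
length = len(raw)

# Slicing to the known length before decoding keeps everything,
# including the bytes after the NUL.
ustring = raw[:length].decode('UTF-8')
print(len(ustring))  # 7
```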
diff -Nru cython-0.26.1/docs/examples/tutorial/string/for_bytes.pyx cython-0.29.14/docs/examples/tutorial/string/for_bytes.pyx
--- cython-0.26.1/docs/examples/tutorial/string/for_bytes.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/for_bytes.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,6 @@
+cdef bytes bytes_string = b"hello to A bytes' world"
+
+cdef char c
+for c in bytes_string:
+ if c == 'A':
+ print("Found the letter A")
diff -Nru cython-0.26.1/docs/examples/tutorial/string/for_char.pyx cython-0.29.14/docs/examples/tutorial/string/for_char.pyx
--- cython-0.26.1/docs/examples/tutorial/string/for_char.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/for_char.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,6 @@
+cdef char* c_string = "Hello to A C-string's world"
+
+cdef char c
+for c in c_string[:11]:
+ if c == 'A':
+ print("Found the letter A")
diff -Nru cython-0.26.1/docs/examples/tutorial/string/for_unicode.pyx cython-0.29.14/docs/examples/tutorial/string/for_unicode.pyx
--- cython-0.26.1/docs/examples/tutorial/string/for_unicode.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/for_unicode.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,6 @@
+cdef unicode ustring = u'Hello world'
+
+# NOTE: no typing required for 'uchar' !
+for uchar in ustring:
+ if uchar == u'A':
+ print("Found the letter A")
diff -Nru cython-0.26.1/docs/examples/tutorial/string/if_char_in.pyx cython-0.29.14/docs/examples/tutorial/string/if_char_in.pyx
--- cython-0.26.1/docs/examples/tutorial/string/if_char_in.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/if_char_in.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,5 @@
+cpdef void is_in(Py_UCS4 uchar_val):
+ if uchar_val in u'abcABCxY':
+ print("The character is in the string.")
+ else:
+ print("The character is not in the string")
diff -Nru cython-0.26.1/docs/examples/tutorial/string/naive_decode.pyx cython-0.29.14/docs/examples/tutorial/string/naive_decode.pyx
--- cython-0.26.1/docs/examples/tutorial/string/naive_decode.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/naive_decode.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,4 @@
+from c_func cimport c_call_returning_a_c_string
+
+cdef char* some_c_string = c_call_returning_a_c_string()
+ustring = some_c_string.decode('UTF-8')
diff -Nru cython-0.26.1/docs/examples/tutorial/string/return_memview.pyx cython-0.29.14/docs/examples/tutorial/string/return_memview.pyx
--- cython-0.26.1/docs/examples/tutorial/string/return_memview.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/return_memview.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,9 @@
+def process_byte_data(unsigned char[:] data):
+ # ... process the data, here, dummy processing.
+ cdef bint return_all = (data[0] == 108)
+
+ if return_all:
+ return bytes(data)
+ else:
+ # example for returning a slice
+ return bytes(data[5:7])
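The same logic can be exercised in plain Python with a `memoryview`, which mirrors the `unsigned char[:]` argument of the Cython version:

```python
def process_byte_data(data):
    # 'data' is a memoryview here, standing in for unsigned char[:].
    return_all = (data[0] == 108)  # 108 == ord('l')
    if return_all:
        return bytes(data)
    # Example of returning a slice: indices 5..6 of the buffer.
    return bytes(data[5:7])

partial = process_byte_data(memoryview(b'hello world'))  # b' w' ('h' != 'l')
full = process_byte_data(memoryview(b'lorem ipsum'))     # whole buffer
```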
diff -Nru cython-0.26.1/docs/examples/tutorial/string/slicing_c_string.pyx cython-0.29.14/docs/examples/tutorial/string/slicing_c_string.pyx
--- cython-0.26.1/docs/examples/tutorial/string/slicing_c_string.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/slicing_c_string.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,15 @@
+from libc.stdlib cimport free
+from c_func cimport get_a_c_string
+
+
+def main():
+ cdef char* c_string = NULL
+ cdef Py_ssize_t length = 0
+
+ # get pointer and length from a C function
+ get_a_c_string(&c_string, &length)
+
+ try:
+ py_bytes_string = c_string[:length] # Performs a copy of the data
+ finally:
+ free(c_string)
diff -Nru cython-0.26.1/docs/examples/tutorial/string/someheader.h cython-0.29.14/docs/examples/tutorial/string/someheader.h
--- cython-0.26.1/docs/examples/tutorial/string/someheader.h 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/someheader.h 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,3 @@
+typedef const char specialChar;
+int process_string(const char* s);
+const unsigned char* look_up_cached_string(const unsigned char* key);
diff -Nru cython-0.26.1/docs/examples/tutorial/string/to_char.pyx cython-0.29.14/docs/examples/tutorial/string/to_char.pyx
--- cython-0.26.1/docs/examples/tutorial/string/to_char.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/to_char.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,8 @@
+# define a global name for whatever char type is used in the module
+ctypedef unsigned char char_type
+
+cdef char_type[:] _chars(s):
+ if isinstance(s, unicode):
+ # encode to the specific encoding used inside of the module
+ s = (<unicode>s).encode('utf8')
+ return s
diff -Nru cython-0.26.1/docs/examples/tutorial/string/to_unicode.pxd cython-0.29.14/docs/examples/tutorial/string/to_unicode.pxd
--- cython-0.26.1/docs/examples/tutorial/string/to_unicode.pxd 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/to_unicode.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1 @@
+cdef unicode _text(s)
diff -Nru cython-0.26.1/docs/examples/tutorial/string/to_unicode.pyx cython-0.29.14/docs/examples/tutorial/string/to_unicode.pyx
--- cython-0.26.1/docs/examples/tutorial/string/to_unicode.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/to_unicode.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,22 @@
+# to_unicode.pyx
+
+from cpython.version cimport PY_MAJOR_VERSION
+
+cdef unicode _text(s):
+ if type(s) is unicode:
+ # Fast path for most common case(s).
+ return s
+
+ elif PY_MAJOR_VERSION < 3 and isinstance(s, bytes):
+ # Only accept byte strings as text input in Python 2.x, not in Py3.
+ return (<bytes>s).decode('ascii')
+
+ elif isinstance(s, unicode):
+ # We know from the fast path above that 's' can only be a subtype here.
+ # An evil cast to <unicode> might still work in some(!) cases,
+ # depending on what the further processing does. To be safe,
+ # we can always create a copy instead.
+ return unicode(s)
+
+ else:
+ raise TypeError("Could not convert to unicode.")
diff -Nru cython-0.26.1/docs/examples/tutorial/string/try_finally.pyx cython-0.29.14/docs/examples/tutorial/string/try_finally.pyx
--- cython-0.26.1/docs/examples/tutorial/string/try_finally.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/try_finally.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,9 @@
+from libc.stdlib cimport free
+from c_func cimport c_call_returning_a_c_string
+
+cdef bytes py_string
+cdef char* c_string = c_call_returning_a_c_string()
+try:
+ py_string = c_string
+finally:
+ free(c_string)
diff -Nru cython-0.26.1/docs/examples/tutorial/string/utf_eight.pyx cython-0.29.14/docs/examples/tutorial/string/utf_eight.pyx
--- cython-0.26.1/docs/examples/tutorial/string/utf_eight.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/tutorial/string/utf_eight.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,15 @@
+from libc.stdlib cimport free
+
+cdef unicode tounicode(char* s):
+ return s.decode('UTF-8', 'strict')
+
+cdef unicode tounicode_with_length(
+ char* s, size_t length):
+ return s[:length].decode('UTF-8', 'strict')
+
+cdef unicode tounicode_with_length_and_free(
+ char* s, size_t length):
+ try:
+ return s[:length].decode('UTF-8', 'strict')
+ finally:
+ free(s)
\ No newline at end of file
diff -Nru cython-0.26.1/docs/examples/userguide/buffer/matrix.pyx cython-0.29.14/docs/examples/userguide/buffer/matrix.pyx
--- cython-0.26.1/docs/examples/userguide/buffer/matrix.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/buffer/matrix.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,16 @@
+# distutils: language = c++
+
+# matrix.pyx
+
+from libcpp.vector cimport vector
+
+cdef class Matrix:
+ cdef unsigned ncols
+ cdef vector[float] v
+
+ def __cinit__(self, unsigned ncols):
+ self.ncols = ncols
+
+ def add_row(self):
+ """Adds a row, initially zero-filled."""
+ self.v.resize(self.v.size() + self.ncols)
diff -Nru cython-0.26.1/docs/examples/userguide/buffer/matrix_with_buffer.pyx cython-0.29.14/docs/examples/userguide/buffer/matrix_with_buffer.pyx
--- cython-0.26.1/docs/examples/userguide/buffer/matrix_with_buffer.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/buffer/matrix_with_buffer.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,45 @@
+# distutils: language = c++
+
+from cpython cimport Py_buffer
+from libcpp.vector cimport vector
+
+cdef class Matrix:
+ cdef Py_ssize_t ncols
+ cdef Py_ssize_t shape[2]
+ cdef Py_ssize_t strides[2]
+ cdef vector[float] v
+
+ def __cinit__(self, Py_ssize_t ncols):
+ self.ncols = ncols
+
+ def add_row(self):
+ """Adds a row, initially zero-filled."""
+ self.v.resize(self.v.size() + self.ncols)
+
+ def __getbuffer__(self, Py_buffer *buffer, int flags):
+ cdef Py_ssize_t itemsize = sizeof(self.v[0])
+
+ self.shape[0] = self.v.size() / self.ncols
+ self.shape[1] = self.ncols
+
+ # Stride 1 is the distance, in bytes, between two items in a row;
+ # this is the distance between two adjacent items in the vector.
+ # Stride 0 is the distance between the first elements of adjacent rows.
+ self.strides[1] = <Py_ssize_t>(<char *>&(self.v[1])
+ - <char *>&(self.v[0]))
+ self.strides[0] = self.ncols * self.strides[1]
+
+ buffer.buf = <char *>&(self.v[0])
+ buffer.format = 'f' # float
+ buffer.internal = NULL # see References
+ buffer.itemsize = itemsize
+ buffer.len = self.v.size() * itemsize # product(shape) * itemsize
+ buffer.ndim = 2
+ buffer.obj = self
+ buffer.readonly = 0
+ buffer.shape = self.shape
+ buffer.strides = self.strides
+ buffer.suboffsets = NULL # for pointer arrays only
+
+ def __releasebuffer__(self, Py_buffer *buffer):
+ pass
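The stride arithmetic in `__getbuffer__` above is the standard C-contiguous layout. A plain-Python check of the same computation (itemsize is 4 because the buffer format is `'f'`, a C float):

```python
itemsize = 4   # sizeof(float), matching format 'f'
ncols = 3
nrows = 2

# strides[1]: bytes between adjacent items in a row
# strides[0]: bytes between the first items of adjacent rows
strides = (ncols * itemsize, itemsize)
shape = (nrows, ncols)

def offset(i, j):
    # Byte offset of element (i, j) in the flat buffer.
    return i * strides[0] + j * strides[1]

first_of_row_two = offset(1, 0)  # 12: one full row of 3 floats
```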
diff -Nru cython-0.26.1/docs/examples/userguide/buffer/view_count.pyx cython-0.29.14/docs/examples/userguide/buffer/view_count.pyx
--- cython-0.26.1/docs/examples/userguide/buffer/view_count.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/buffer/view_count.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,29 @@
+# distutils: language = c++
+
+from cpython cimport Py_buffer
+from libcpp.vector cimport vector
+
+cdef class Matrix:
+
+ cdef int view_count
+
+ cdef Py_ssize_t ncols
+ cdef vector[float] v
+ # ...
+
+ def __cinit__(self, Py_ssize_t ncols):
+ self.ncols = ncols
+ self.view_count = 0
+
+ def add_row(self):
+ if self.view_count > 0:
+ raise ValueError("can't add row while being viewed")
+ self.v.resize(self.v.size() + self.ncols)
+
+ def __getbuffer__(self, Py_buffer *buffer, int flags):
+ # ... as before
+
+ self.view_count += 1
+
+ def __releasebuffer__(self, Py_buffer *buffer):
+ self.view_count -= 1
\ No newline at end of file
diff -Nru cython-0.26.1/docs/examples/userguide/early_binding_for_speed/rectangle_cdef.pyx cython-0.29.14/docs/examples/userguide/early_binding_for_speed/rectangle_cdef.pyx
--- cython-0.26.1/docs/examples/userguide/early_binding_for_speed/rectangle_cdef.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/early_binding_for_speed/rectangle_cdef.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,22 @@
+cdef class Rectangle:
+ cdef int x0, y0
+ cdef int x1, y1
+
+ def __init__(self, int x0, int y0, int x1, int y1):
+ self.x0 = x0
+ self.y0 = y0
+ self.x1 = x1
+ self.y1 = y1
+
+ cdef int _area(self):
+ area = (self.x1 - self.x0) * (self.y1 - self.y0)
+ if area < 0:
+ area = -area
+ return area
+
+ def area(self):
+ return self._area()
+
+def rectArea(x0, y0, x1, y1):
+ rect = Rectangle(x0, y0, x1, y1)
+ return rect.area()
diff -Nru cython-0.26.1/docs/examples/userguide/early_binding_for_speed/rectangle_cpdef.pyx cython-0.29.14/docs/examples/userguide/early_binding_for_speed/rectangle_cpdef.pyx
--- cython-0.26.1/docs/examples/userguide/early_binding_for_speed/rectangle_cpdef.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/early_binding_for_speed/rectangle_cpdef.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,19 @@
+cdef class Rectangle:
+ cdef int x0, y0
+ cdef int x1, y1
+
+ def __init__(self, int x0, int y0, int x1, int y1):
+ self.x0 = x0
+ self.y0 = y0
+ self.x1 = x1
+ self.y1 = y1
+
+ cpdef int area(self):
+ area = (self.x1 - self.x0) * (self.y1 - self.y0)
+ if area < 0:
+ area = -area
+ return area
+
+def rectArea(x0, y0, x1, y1):
+ rect = Rectangle(x0, y0, x1, y1)
+ return rect.area()
diff -Nru cython-0.26.1/docs/examples/userguide/early_binding_for_speed/rectangle.pyx cython-0.29.14/docs/examples/userguide/early_binding_for_speed/rectangle.pyx
--- cython-0.26.1/docs/examples/userguide/early_binding_for_speed/rectangle.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/early_binding_for_speed/rectangle.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,19 @@
+cdef class Rectangle:
+ cdef int x0, y0
+ cdef int x1, y1
+
+ def __init__(self, int x0, int y0, int x1, int y1):
+ self.x0 = x0
+ self.y0 = y0
+ self.x1 = x1
+ self.y1 = y1
+
+ def area(self):
+ area = (self.x1 - self.x0) * (self.y1 - self.y0)
+ if area < 0:
+ area = -area
+ return area
+
+def rectArea(x0, y0, x1, y1):
+ rect = Rectangle(x0, y0, x1, y1)
+ return rect.area()
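All three `Rectangle` variants above (plain `def`, `cdef` with a wrapper, and `cpdef`) share identical semantics and differ only in call-binding cost; a pure-Python version for sanity-checking the results:

```python
class Rectangle:
    def __init__(self, x0, y0, x1, y1):
        self.x0, self.y0 = x0, y0
        self.x1, self.y1 = x1, y1

    def area(self):
        # Absolute value: corners may be given in either order.
        area = (self.x1 - self.x0) * (self.y1 - self.y0)
        return -area if area < 0 else area

def rectArea(x0, y0, x1, y1):
    return Rectangle(x0, y0, x1, y1).area()

a = rectArea(0, 0, 3, 4)  # 12
b = rectArea(3, 4, 0, 0)  # 12, corners swapped
```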
diff -Nru cython-0.26.1/docs/examples/userguide/extension_types/dict_animal.pyx cython-0.29.14/docs/examples/userguide/extension_types/dict_animal.pyx
--- cython-0.26.1/docs/examples/userguide/extension_types/dict_animal.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/extension_types/dict_animal.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,11 @@
+cdef class Animal:
+
+ cdef int number_of_legs
+ cdef dict __dict__
+
+ def __cinit__(self, int number_of_legs):
+ self.number_of_legs = number_of_legs
+
+
+dog = Animal(4)
+dog.has_tail = True
diff -Nru cython-0.26.1/docs/examples/userguide/extension_types/extendable_animal.pyx cython-0.29.14/docs/examples/userguide/extension_types/extendable_animal.pyx
--- cython-0.26.1/docs/examples/userguide/extension_types/extendable_animal.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/extension_types/extendable_animal.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,14 @@
+cdef class Animal:
+
+ cdef int number_of_legs
+
+ def __cinit__(self, int number_of_legs):
+ self.number_of_legs = number_of_legs
+
+
+class ExtendableAnimal(Animal): # Note that we use class, not cdef class
+ pass
+
+
+dog = ExtendableAnimal(4)
+dog.has_tail = True
\ No newline at end of file
diff -Nru cython-0.26.1/docs/examples/userguide/extension_types/my_module.pxd cython-0.29.14/docs/examples/userguide/extension_types/my_module.pxd
--- cython-0.26.1/docs/examples/userguide/extension_types/my_module.pxd 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/extension_types/my_module.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,2 @@
+cdef class Shrubbery:
+ cdef int width, height
diff -Nru cython-0.26.1/docs/examples/userguide/extension_types/my_module.pyx cython-0.29.14/docs/examples/userguide/extension_types/my_module.pyx
--- cython-0.26.1/docs/examples/userguide/extension_types/my_module.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/extension_types/my_module.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,11 @@
+from __future__ import print_function
+
+cdef class Shrubbery:
+
+ def __init__(self, w, h):
+ self.width = w
+ self.height = h
+
+ def describe(self):
+ print("This shrubbery is", self.width,
+ "by", self.height, "cubits.")
diff -Nru cython-0.26.1/docs/examples/userguide/extension_types/python_access.pyx cython-0.29.14/docs/examples/userguide/extension_types/python_access.pyx
--- cython-0.26.1/docs/examples/userguide/extension_types/python_access.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/extension_types/python_access.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,3 @@
+cdef class Shrubbery:
+ cdef public int width, height
+ cdef readonly float depth
diff -Nru cython-0.26.1/docs/examples/userguide/extension_types/shrubbery_2.pyx cython-0.29.14/docs/examples/userguide/extension_types/shrubbery_2.pyx
--- cython-0.26.1/docs/examples/userguide/extension_types/shrubbery_2.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/extension_types/shrubbery_2.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,8 @@
+from my_module cimport Shrubbery
+
+cdef Shrubbery another_shrubbery(Shrubbery sh1):
+ cdef Shrubbery sh2
+ sh2 = Shrubbery()
+ sh2.width = sh1.width
+ sh2.height = sh1.height
+ return sh2
diff -Nru cython-0.26.1/docs/examples/userguide/extension_types/shrubbery.pyx cython-0.29.14/docs/examples/userguide/extension_types/shrubbery.pyx
--- cython-0.26.1/docs/examples/userguide/extension_types/shrubbery.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/extension_types/shrubbery.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,12 @@
+from __future__ import print_function
+
+cdef class Shrubbery:
+ cdef int width, height
+
+ def __init__(self, w, h):
+ self.width = w
+ self.height = h
+
+ def describe(self):
+ print("This shrubbery is", self.width,
+ "by", self.height, "cubits.")
diff -Nru cython-0.26.1/docs/examples/userguide/extension_types/widen_shrubbery.pyx cython-0.29.14/docs/examples/userguide/extension_types/widen_shrubbery.pyx
--- cython-0.26.1/docs/examples/userguide/extension_types/widen_shrubbery.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/extension_types/widen_shrubbery.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,4 @@
+from my_module cimport Shrubbery
+
+cdef widen_shrubbery(Shrubbery sh, extra_width):
+ sh.width = sh.width + extra_width
diff -Nru cython-0.26.1/docs/examples/userguide/external_C_code/c_code_docstring.pyx cython-0.29.14/docs/examples/userguide/external_C_code/c_code_docstring.pyx
--- cython-0.26.1/docs/examples/userguide/external_C_code/c_code_docstring.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/external_C_code/c_code_docstring.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,9 @@
+cdef extern from *:
+ """
+ /* This is C code which will be put
+ * in the .c file output by Cython */
+ static long square(long x) {return x * x;}
+ #define assign(x, y) ((x) = (y))
+ """
+ long square(long x)
+ void assign(long& x, long y)
diff -Nru cython-0.26.1/docs/examples/userguide/external_C_code/delorean.pyx cython-0.29.14/docs/examples/userguide/external_C_code/delorean.pyx
--- cython-0.26.1/docs/examples/userguide/external_C_code/delorean.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/external_C_code/delorean.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,9 @@
+# delorean.pyx
+
+cdef public struct Vehicle:
+ int speed
+ float power
+
+cdef api void activate(Vehicle *v):
+ if v.speed >= 88 and v.power >= 1.21:
+ print("Time travel achieved")
\ No newline at end of file
diff -Nru cython-0.26.1/docs/examples/userguide/external_C_code/marty.c cython-0.29.14/docs/examples/userguide/external_C_code/marty.c
--- cython-0.26.1/docs/examples/userguide/external_C_code/marty.c 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/external_C_code/marty.c 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,13 @@
+/* marty.c */
+#include "delorean_api.h"
+
+Vehicle car;
+
+int main(int argc, char *argv[]) {
+ Py_Initialize();
+ import_delorean();
+ car.speed = atoi(argv[1]);
+ car.power = atof(argv[2]);
+ activate(&car);
+ Py_Finalize();
+}
diff -Nru cython-0.26.1/docs/examples/userguide/fusedtypes/char_or_float.pyx cython-0.29.14/docs/examples/userguide/fusedtypes/char_or_float.pyx
--- cython-0.26.1/docs/examples/userguide/fusedtypes/char_or_float.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/fusedtypes/char_or_float.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,17 @@
+from __future__ import print_function
+
+ctypedef fused char_or_float:
+ char
+ float
+
+
+cpdef char_or_float plus_one(char_or_float var):
+ return var + 1
+
+
+def show_me():
+ cdef:
+ char a = 127
+ float b = 127
+ print('char', plus_one(a))
+ print('float', plus_one(b))
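The two calls above print different results because C `char` arithmetic wraps at 8 bits (on platforms where `char` is signed), so `plus_one(127)` as a `char` wraps around while the `float` version does not. A pure-Python sketch of that wrap-around, assuming a signed 8-bit `char`:

```python
def wrap_signed_8bit(value):
    # Reduce to the signed 8-bit range [-128, 127], as C char arithmetic
    # does on platforms where char is signed (an assumption here).
    return ((value + 128) % 256) - 128

def plus_one_char(var):
    return wrap_signed_8bit(var + 1)

def plus_one_float(var):
    return float(var) + 1.0

print('char', plus_one_char(127))    # wraps to -128
print('float', plus_one_float(127))  # 128.0
```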
diff -Nru cython-0.26.1/docs/examples/userguide/language_basics/casting_python.pyx cython-0.29.14/docs/examples/userguide/language_basics/casting_python.pyx
--- cython-0.26.1/docs/examples/userguide/language_basics/casting_python.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/language_basics/casting_python.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,19 @@
+from cpython.ref cimport PyObject
+
+cdef extern from *:
+ ctypedef Py_ssize_t Py_intptr_t
+
+python_string = "foo"
+
+cdef void* ptr = python_string
+cdef Py_intptr_t address_in_c = ptr
+address_from_void = address_in_c # address_from_void is a python int
+
+cdef PyObject* ptr2 = python_string
+cdef Py_intptr_t address_in_c2 = ptr2
+address_from_PyObject = address_in_c2 # address_from_PyObject is a python int
+
+assert address_from_void == address_from_PyObject == id(python_string)
+
+print(ptr) # Prints "foo"
+print(ptr2) # prints "foo"
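A comparable round-trip from an object's address back to the object can be sketched in plain Python with `ctypes`. This is CPython-specific and purely illustrative: `id()` happens to be a memory address only in CPython.

```python
import ctypes

python_string = "foo"
address = id(python_string)  # in CPython, the object's memory address

# Reinterpret the integer address as a Python object pointer (CPython only).
recovered = ctypes.cast(address, ctypes.py_object).value

assert recovered is python_string
print(recovered)  # prints "foo"
```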
diff -Nru cython-0.26.1/docs/examples/userguide/language_basics/cdef_block.pyx cython-0.29.14/docs/examples/userguide/language_basics/cdef_block.pyx
--- cython-0.26.1/docs/examples/userguide/language_basics/cdef_block.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/language_basics/cdef_block.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,12 @@
+from __future__ import print_function
+
+cdef:
+ struct Spam:
+ int tons
+
+ int i
+ float a
+ Spam *p
+
+ void f(Spam *s):
+ print(s.tons, "Tons of spam")
diff -Nru cython-0.26.1/docs/examples/userguide/language_basics/compile_time.pyx cython-0.29.14/docs/examples/userguide/language_basics/compile_time.pyx
--- cython-0.26.1/docs/examples/userguide/language_basics/compile_time.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/language_basics/compile_time.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,9 @@
+from __future__ import print_function
+
+DEF FavouriteFood = u"spam"
+DEF ArraySize = 42
+DEF OtherArraySize = 2 * ArraySize + 17
+
+cdef int a1[ArraySize]
+cdef int a2[OtherArraySize]
+print("I like", FavouriteFood)
\ No newline at end of file
diff -Nru cython-0.26.1/docs/examples/userguide/language_basics/kwargs_1.pyx cython-0.29.14/docs/examples/userguide/language_basics/kwargs_1.pyx
--- cython-0.26.1/docs/examples/userguide/language_basics/kwargs_1.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/language_basics/kwargs_1.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,6 @@
+def f(a, b, *args, c, d = 42, e, **kwds):
+ ...
+
+
+# We cannot call f with less verbosity than this.
+foo = f(4, "bar", c=68, e=1.0)
diff -Nru cython-0.26.1/docs/examples/userguide/language_basics/kwargs_2.pyx cython-0.29.14/docs/examples/userguide/language_basics/kwargs_2.pyx
--- cython-0.26.1/docs/examples/userguide/language_basics/kwargs_2.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/language_basics/kwargs_2.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,5 @@
+def g(a, b, *, c, d):
+ ...
+
+# We cannot call g with less verbosity than this.
+foo = g(4.0, "something", c=68, d="other")
diff -Nru cython-0.26.1/docs/examples/userguide/language_basics/open_file.pyx cython-0.29.14/docs/examples/userguide/language_basics/open_file.pyx
--- cython-0.26.1/docs/examples/userguide/language_basics/open_file.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/language_basics/open_file.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,18 @@
+from libc.stdio cimport FILE, fopen
+from libc.stdlib cimport malloc, free
+from cpython.exc cimport PyErr_SetFromErrnoWithFilenameObject
+
+def open_file():
+ cdef FILE* p
+ p = fopen("spam.txt", "r")
+ if p is NULL:
+ PyErr_SetFromErrnoWithFilenameObject(OSError, "spam.txt")
+ ...
+
+
+def allocating_memory(number=10):
+ cdef double *my_array = malloc(number * sizeof(double))
+ if not my_array: # same as 'is NULL' above
+ raise MemoryError()
+ ...
+ free(my_array)
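`PyErr_SetFromErrnoWithFilenameObject` raises an `OSError` built from the current C `errno`. The plain-Python counterpart constructs that exception explicitly from an errno value; the sketch below is illustrative only, since `open()` already raises exactly such an `OSError` on its own:

```python
import errno
import os

def open_file(path="spam.txt"):
    # Plain-Python counterpart of the fopen/PyErr_SetFromErrnoWithFilenameObject
    # pattern: raise an OSError carrying errno, message and filename.
    try:
        return open(path)
    except FileNotFoundError:
        raise OSError(errno.ENOENT, os.strerror(errno.ENOENT), path)

try:
    open_file("no-such-spam.txt")
except OSError as exc:
    print(exc.errno == errno.ENOENT, exc.filename)
```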
diff -Nru cython-0.26.1/docs/examples/userguide/language_basics/optional_subclassing.pxd cython-0.29.14/docs/examples/userguide/language_basics/optional_subclassing.pxd
--- cython-0.26.1/docs/examples/userguide/language_basics/optional_subclassing.pxd 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/language_basics/optional_subclassing.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,8 @@
+cdef class A:
+ cdef foo(self)
+
+cdef class B(A):
+ cdef foo(self, x=*)
+
+cdef class C(B):
+ cpdef foo(self, x=*, int k=*)
diff -Nru cython-0.26.1/docs/examples/userguide/language_basics/optional_subclassing.pyx cython-0.29.14/docs/examples/userguide/language_basics/optional_subclassing.pyx
--- cython-0.26.1/docs/examples/userguide/language_basics/optional_subclassing.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/language_basics/optional_subclassing.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,13 @@
+from __future__ import print_function
+
+cdef class A:
+ cdef foo(self):
+ print("A")
+
+cdef class B(A):
+ cdef foo(self, x=None):
+ print("B", x)
+
+cdef class C(B):
+ cpdef foo(self, x=True, int k=3):
+ print("C", x, k)
diff -Nru cython-0.26.1/docs/examples/userguide/language_basics/override.pyx cython-0.29.14/docs/examples/userguide/language_basics/override.pyx
--- cython-0.26.1/docs/examples/userguide/language_basics/override.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/language_basics/override.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,13 @@
+from __future__ import print_function
+
+cdef class A:
+ cdef foo(self):
+ print("A")
+
+cdef class B(A):
+ cpdef foo(self):
+ print("B")
+
+class C(B): # NOTE: not cdef class
+ def foo(self):
+ print("C")
diff -Nru cython-0.26.1/docs/examples/userguide/language_basics/struct_union_enum.pyx cython-0.29.14/docs/examples/userguide/language_basics/struct_union_enum.pyx
--- cython-0.26.1/docs/examples/userguide/language_basics/struct_union_enum.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/language_basics/struct_union_enum.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,16 @@
+cdef struct Grail:
+ int age
+ float volume
+
+cdef union Food:
+ char *spam
+ float *eggs
+
+cdef enum CheeseType:
+ cheddar, edam,
+ camembert
+
+cdef enum CheeseState:
+ hard = 1
+ soft = 2
+ runny = 3
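The C enums above map naturally onto Python's `enum` module; a sketch of `CheeseState` with the same explicit values:

```python
from enum import IntEnum

class CheeseState(IntEnum):
    # Same explicit values as the cdef enum above.
    hard = 1
    soft = 2
    runny = 3

print(CheeseState.soft)        # CheeseState.soft
print(int(CheeseState.runny))  # 3
```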
diff -Nru cython-0.26.1/docs/examples/userguide/memoryviews/add_one.pyx cython-0.29.14/docs/examples/userguide/memoryviews/add_one.pyx
--- cython-0.26.1/docs/examples/userguide/memoryviews/add_one.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/memoryviews/add_one.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,12 @@
+import numpy as np
+
+def add_one(int[:,:] buf):
+ for x in range(buf.shape[0]):
+ for y in range(buf.shape[1]):
+ buf[x, y] += 1
+
+# exporting_object must be a Python object
+# implementing the buffer interface, e.g. a numpy array.
+exporting_object = np.zeros((10, 20), dtype=np.intc)
+
+add_one(exporting_object)
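The same buffer-protocol idea exists in plain Python via the built-in `memoryview`; a 1-D stdlib-only sketch (using the `array` module in place of NumPy) in which mutating the view mutates the exporting object:

```python
from array import array

def add_one(buf):
    # 'buf' is a 1-D memoryview of C ints; writes go straight through
    # to the exporting object's buffer.
    for i in range(buf.shape[0]):
        buf[i] += 1

exporting_object = array('i', [0] * 10)
add_one(memoryview(exporting_object))
print(exporting_object[0])  # 1
```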
diff -Nru cython-0.26.1/docs/examples/userguide/memoryviews/C_func_file.c cython-0.29.14/docs/examples/userguide/memoryviews/C_func_file.c
--- cython-0.26.1/docs/examples/userguide/memoryviews/C_func_file.c 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/memoryviews/C_func_file.c 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,9 @@
+#include "C_func_file.h"
+
+void multiply_by_10_in_C(double arr[], unsigned int n)
+{
+ unsigned int i;
+ for (i = 0; i < n; i++) {
+ arr[i] *= 10;
+ }
+}
diff -Nru cython-0.26.1/docs/examples/userguide/memoryviews/C_func_file.h cython-0.29.14/docs/examples/userguide/memoryviews/C_func_file.h
--- cython-0.26.1/docs/examples/userguide/memoryviews/C_func_file.h 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/memoryviews/C_func_file.h 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,6 @@
+#ifndef C_FUNC_FILE_H
+#define C_FUNC_FILE_H
+
+void multiply_by_10_in_C(double arr[], unsigned int n);
+
+#endif
diff -Nru cython-0.26.1/docs/examples/userguide/memoryviews/copy.pyx cython-0.29.14/docs/examples/userguide/memoryviews/copy.pyx
--- cython-0.26.1/docs/examples/userguide/memoryviews/copy.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/memoryviews/copy.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,12 @@
+import numpy as np
+
+cdef int[:, :, :] to_view, from_view
+to_view = np.empty((20, 15, 30), dtype=np.intc)
+from_view = np.ones((20, 15, 30), dtype=np.intc)
+
+# copy the elements in from_view to to_view
+to_view[...] = from_view
+# or
+to_view[:] = from_view
+# or
+to_view[:, :, :] = from_view
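The key distinction is that slice assignment copies elements between buffers, whereas plain assignment merely rebinds the name. A 1-D stdlib sketch of the element-wise copy:

```python
from array import array

to_buf = array('i', [0, 0, 0])
from_buf = array('i', [1, 2, 3])

# Element-wise copy through the buffer protocol, analogous to
# to_view[:] = from_view above; to_buf's own storage is overwritten.
memoryview(to_buf)[:] = memoryview(from_buf)
print(list(to_buf))  # [1, 2, 3]
```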
diff -Nru cython-0.26.1/docs/examples/userguide/memoryviews/memory_layout_2.pyx cython-0.29.14/docs/examples/userguide/memoryviews/memory_layout_2.pyx
--- cython-0.26.1/docs/examples/userguide/memoryviews/memory_layout_2.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/memoryviews/memory_layout_2.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,6 @@
+from cython cimport view
+
+# VALID
+cdef int[::view.indirect, ::1, :] a
+cdef int[::view.indirect, :, ::1] b
+cdef int[::view.indirect_contiguous, ::1, :] c
diff -Nru cython-0.26.1/docs/examples/userguide/memoryviews/memory_layout.pyx cython-0.29.14/docs/examples/userguide/memoryviews/memory_layout.pyx
--- cython-0.26.1/docs/examples/userguide/memoryviews/memory_layout.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/memoryviews/memory_layout.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,11 @@
+from cython cimport view
+
+# direct access in both dimensions, strided in the first dimension, contiguous in the last
+cdef int[:, ::view.contiguous] a
+
+# contiguous list of pointers to contiguous lists of ints
+cdef int[::view.indirect_contiguous, ::1] b
+
+# direct or indirect in the first dimension, direct in the second dimension
+# strided in both dimensions
+cdef int[::view.generic, :] c
diff -Nru cython-0.26.1/docs/examples/userguide/memoryviews/memview_to_c.pyx cython-0.29.14/docs/examples/userguide/memoryviews/memview_to_c.pyx
--- cython-0.26.1/docs/examples/userguide/memoryviews/memview_to_c.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/memoryviews/memview_to_c.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,28 @@
+cdef extern from "C_func_file.c":
+    # The C code is included here so that it doesn't need to be compiled externally
+ pass
+
+cdef extern from "C_func_file.h":
+ void multiply_by_10_in_C(double *, unsigned int)
+
+import numpy as np
+
+def multiply_by_10(arr): # 'arr' is a one-dimensional numpy array
+
+ if not arr.flags['C_CONTIGUOUS']:
+ arr = np.ascontiguousarray(arr) # Makes a contiguous copy of the numpy array.
+
+ cdef double[::1] arr_memview = arr
+
+ multiply_by_10_in_C(&arr_memview[0], arr_memview.shape[0])
+
+ return arr
+
+
+a = np.ones(5, dtype=np.double)
+print(multiply_by_10(a))
+
+b = np.ones(10, dtype=np.double)
+b = b[::2] # b is not contiguous.
+
+print(multiply_by_10(b)) # but our function still works as expected.
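The contiguity check has a stdlib counterpart: a strided `memoryview` is not C-contiguous, and `tobytes()` produces a contiguous copy, much as `np.ascontiguousarray` does above. A sketch:

```python
data = memoryview(bytes(range(10)))
strided = data[::2]                  # every other byte: not contiguous
print(strided.c_contiguous)          # False

contiguous_copy = memoryview(strided.tobytes())
print(contiguous_copy.c_contiguous)  # True
```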
diff -Nru cython-0.26.1/docs/examples/userguide/memoryviews/not_none.pyx cython-0.29.14/docs/examples/userguide/memoryviews/not_none.pyx
--- cython-0.26.1/docs/examples/userguide/memoryviews/not_none.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/memoryviews/not_none.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,11 @@
+import numpy as np
+
+def process_buffer(int[:,:] input_view not None,
+ int[:,:] output_view=None):
+
+ if output_view is None:
+ # Creating a default view, e.g.
+ output_view = np.empty_like(input_view)
+
+ # process 'input_view' into 'output_view'
+ return output_view
diff -Nru cython-0.26.1/docs/examples/userguide/memoryviews/np_flag_const.pyx cython-0.29.14/docs/examples/userguide/memoryviews/np_flag_const.pyx
--- cython-0.26.1/docs/examples/userguide/memoryviews/np_flag_const.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/memoryviews/np_flag_const.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,7 @@
+import numpy as np
+
+cdef const double[:] myslice # const item type => read-only view
+
+a = np.linspace(0, 10, num=50)
+a.setflags(write=False)
+myslice = a
diff -Nru cython-0.26.1/docs/examples/userguide/memoryviews/quickstart.pyx cython-0.29.14/docs/examples/userguide/memoryviews/quickstart.pyx
--- cython-0.26.1/docs/examples/userguide/memoryviews/quickstart.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/memoryviews/quickstart.pyx 2019-02-27 12:23:19.000000000 +0000
@@ -0,0 +1,52 @@
+from cython.view cimport array as cvarray
+import numpy as np
+
+# Memoryview on a NumPy array
+narr = np.arange(27, dtype=np.dtype("i")).reshape((3, 3, 3))
+cdef int [:, :, :] narr_view = narr
+
+# Memoryview on a C array
+cdef int carr[3][3][3]
+cdef int [:, :, :] carr_view = carr
+
+# Memoryview on a Cython array
+cyarr = cvarray(shape=(3, 3, 3), itemsize=sizeof(int), format="i")
+cdef int [:, :, :] cyarr_view = cyarr
+
+# Show the sum of all the arrays before altering it
+print("NumPy sum of the NumPy array before assignments: %s" % narr.sum())
+
+# We can copy the values from one memoryview into another using a single
+# statement, by either indexing with ... or (NumPy-style) with a colon.
+carr_view[...] = narr_view
+cyarr_view[:] = narr_view
+# NumPy-style syntax for assigning a single value to all elements.
+narr_view[:, :, :] = 3
+
+# Just to distinguish the arrays
+carr_view[0, 0, 0] = 100
+cyarr_view[0, 0, 0] = 1000
+
+# Assigning into the memoryview on the NumPy array alters the latter
+print("NumPy sum of NumPy array after assignments: %s" % narr.sum())
+
+# A function using a memoryview does not usually need the GIL
+cpdef int sum3d(int[:, :, :] arr) nogil:
+ cdef size_t i, j, k, I, J, K
+ cdef int total = 0
+ I = arr.shape[0]
+ J = arr.shape[1]
+ K = arr.shape[2]
+ for i in range(I):
+ for j in range(J):
+ for k in range(K):
+ total += arr[i, j, k]
+ return total
+
+# A function accepting a memoryview knows how to use a NumPy array,
+# a C array, a Cython array...
+print("Memoryview sum of NumPy array is %s" % sum3d(narr))
+print("Memoryview sum of C array is %s" % sum3d(carr))
+print("Memoryview sum of Cython array is %s" % sum3d(cyarr))
+# ... and of course, a memoryview.
+print("Memoryview sum of C memoryview is %s" % sum3d(carr_view))
diff -Nru cython-0.26.1/docs/examples/userguide/memoryviews/slicing.pyx cython-0.29.14/docs/examples/userguide/memoryviews/slicing.pyx
--- cython-0.26.1/docs/examples/userguide/memoryviews/slicing.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/memoryviews/slicing.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,10 @@
+import numpy as np
+
+exporting_object = np.arange(0, 15 * 10 * 20, dtype=np.intc).reshape((15, 10, 20))
+
+cdef int[:, :, :] my_view = exporting_object
+
+# These are all equivalent
+my_view[10]
+my_view[10, :, :]
+my_view[10, ...]
diff -Nru cython-0.26.1/docs/examples/userguide/memoryviews/transpose.pyx cython-0.29.14/docs/examples/userguide/memoryviews/transpose.pyx
--- cython-0.26.1/docs/examples/userguide/memoryviews/transpose.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/memoryviews/transpose.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,6 @@
+import numpy as np
+
+array = np.arange(20, dtype=np.intc).reshape((2, 10))
+
+cdef int[:, ::1] c_contig = array
+cdef int[::1, :] f_contig = c_contig.T
diff -Nru cython-0.26.1/docs/examples/userguide/memoryviews/view_string.pyx cython-0.29.14/docs/examples/userguide/memoryviews/view_string.pyx
--- cython-0.26.1/docs/examples/userguide/memoryviews/view_string.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/memoryviews/view_string.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,9 @@
+cdef bint is_y_in(const unsigned char[:] string_view):
+ cdef int i
+ for i in range(string_view.shape[0]):
+ if string_view[i] == b'y':
+ return True
+ return False
+
+print(is_y_in(b'hello world')) # False
+print(is_y_in(b'hello Cython')) # True
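The same function in plain Python, using the built-in `memoryview` over a bytes object (each item of which is already an `int` in Python 3):

```python
def is_y_in(string_view):
    view = memoryview(string_view)
    for i in range(view.shape[0]):
        if view[i] == ord('y'):
            return True
    return False

print(is_y_in(b'hello world'))   # False
print(is_y_in(b'hello Cython'))  # True
```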
diff -Nru cython-0.26.1/docs/examples/userguide/numpy_tutorial/compute_fused_types.pyx cython-0.29.14/docs/examples/userguide/numpy_tutorial/compute_fused_types.pyx
--- cython-0.26.1/docs/examples/userguide/numpy_tutorial/compute_fused_types.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/numpy_tutorial/compute_fused_types.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,44 @@
+# cython: infer_types=True
+import numpy as np
+cimport cython
+
+ctypedef fused my_type:
+ int
+ double
+ long long
+
+
+cdef my_type clip(my_type a, my_type min_value, my_type max_value):
+ return min(max(a, min_value), max_value)
+
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def compute(my_type[:, ::1] array_1, my_type[:, ::1] array_2, my_type a, my_type b, my_type c):
+
+ x_max = array_1.shape[0]
+ y_max = array_1.shape[1]
+
+ assert tuple(array_1.shape) == tuple(array_2.shape)
+
+ if my_type is int:
+ dtype = np.intc
+ elif my_type is double:
+ dtype = np.double
+ elif my_type is cython.longlong:
+ dtype = np.longlong
+
+ result = np.zeros((x_max, y_max), dtype=dtype)
+ cdef my_type[:, ::1] result_view = result
+
+ cdef my_type tmp
+ cdef Py_ssize_t x, y
+
+ for x in range(x_max):
+ for y in range(y_max):
+
+ tmp = clip(array_1[x, y], 2, 10)
+ tmp = tmp * a + array_2[x, y] * b
+ result_view[x, y] = tmp + c
+
+ return result
diff -Nru cython-0.26.1/docs/examples/userguide/numpy_tutorial/compute_infer_types.pyx cython-0.29.14/docs/examples/userguide/numpy_tutorial/compute_infer_types.pyx
--- cython-0.26.1/docs/examples/userguide/numpy_tutorial/compute_infer_types.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/numpy_tutorial/compute_infer_types.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,34 @@
+# cython: infer_types=True
+import numpy as np
+cimport cython
+
+DTYPE = np.intc
+
+
+cdef int clip(int a, int min_value, int max_value):
+ return min(max(a, min_value), max_value)
+
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def compute(int[:, ::1] array_1, int[:, ::1] array_2, int a, int b, int c):
+
+ x_max = array_1.shape[0]
+ y_max = array_1.shape[1]
+
+ assert tuple(array_1.shape) == tuple(array_2.shape)
+
+ result = np.zeros((x_max, y_max), dtype=DTYPE)
+ cdef int[:, ::1] result_view = result
+
+ cdef int tmp
+ cdef Py_ssize_t x, y
+
+ for x in range(x_max):
+ for y in range(y_max):
+
+ tmp = clip(array_1[x, y], 2, 10)
+ tmp = tmp * a + array_2[x, y] * b
+ result_view[x, y] = tmp + c
+
+ return result
diff -Nru cython-0.26.1/docs/examples/userguide/numpy_tutorial/compute_memview.pyx cython-0.29.14/docs/examples/userguide/numpy_tutorial/compute_memview.pyx
--- cython-0.26.1/docs/examples/userguide/numpy_tutorial/compute_memview.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/numpy_tutorial/compute_memview.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,34 @@
+import numpy as np
+
+DTYPE = np.intc
+
+
+cdef int clip(int a, int min_value, int max_value):
+ return min(max(a, min_value), max_value)
+
+
+def compute(int[:, :] array_1, int[:, :] array_2, int a, int b, int c):
+
+ cdef Py_ssize_t x_max = array_1.shape[0]
+ cdef Py_ssize_t y_max = array_1.shape[1]
+
+    # array_1.shape is now a C array, so it's not possible
+ # to compare it simply by using == without a for-loop.
+ # To be able to compare it to array_2.shape easily,
+ # we convert them both to Python tuples.
+ assert tuple(array_1.shape) == tuple(array_2.shape)
+
+ result = np.zeros((x_max, y_max), dtype=DTYPE)
+ cdef int[:, :] result_view = result
+
+ cdef int tmp
+ cdef Py_ssize_t x, y
+
+ for x in range(x_max):
+ for y in range(y_max):
+
+ tmp = clip(array_1[x, y], 2, 10)
+ tmp = tmp * a + array_2[x, y] * b
+ result_view[x, y] = tmp + c
+
+ return result
diff -Nru cython-0.26.1/docs/examples/userguide/numpy_tutorial/compute_prange.pyx cython-0.29.14/docs/examples/userguide/numpy_tutorial/compute_prange.pyx
--- cython-0.26.1/docs/examples/userguide/numpy_tutorial/compute_prange.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/numpy_tutorial/compute_prange.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,53 @@
+# tag: openmp
+# You can ignore the previous line.
+# It's for internal testing of the cython documentation.
+
+# distutils: extra_compile_args=-fopenmp
+# distutils: extra_link_args=-fopenmp
+
+import numpy as np
+cimport cython
+from cython.parallel import prange
+
+ctypedef fused my_type:
+ int
+ double
+ long long
+
+
+# We declare our plain C function nogil
+cdef my_type clip(my_type a, my_type min_value, my_type max_value) nogil:
+ return min(max(a, min_value), max_value)
+
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def compute(my_type[:, ::1] array_1, my_type[:, ::1] array_2, my_type a, my_type b, my_type c):
+
+ cdef Py_ssize_t x_max = array_1.shape[0]
+ cdef Py_ssize_t y_max = array_1.shape[1]
+
+ assert tuple(array_1.shape) == tuple(array_2.shape)
+
+ if my_type is int:
+ dtype = np.intc
+ elif my_type is double:
+ dtype = np.double
+ elif my_type is cython.longlong:
+ dtype = np.longlong
+
+ result = np.zeros((x_max, y_max), dtype=dtype)
+ cdef my_type[:, ::1] result_view = result
+
+ cdef my_type tmp
+ cdef Py_ssize_t x, y
+
+ # We use prange here.
+ for x in prange(x_max, nogil=True):
+ for y in range(y_max):
+
+ tmp = clip(array_1[x, y], 2, 10)
+ tmp = tmp * a + array_2[x, y] * b
+ result_view[x, y] = tmp + c
+
+ return result
diff -Nru cython-0.26.1/docs/examples/userguide/numpy_tutorial/compute_py.py cython-0.29.14/docs/examples/userguide/numpy_tutorial/compute_py.py
--- cython-0.26.1/docs/examples/userguide/numpy_tutorial/compute_py.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/numpy_tutorial/compute_py.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,28 @@
+import numpy as np
+
+
+def clip(a, min_value, max_value):
+ return min(max(a, min_value), max_value)
+
+
+def compute(array_1, array_2, a, b, c):
+ """
+ This function must implement the formula
+ np.clip(array_1, 2, 10) * a + array_2 * b + c
+
+ array_1 and array_2 are 2D.
+ """
+ x_max = array_1.shape[0]
+ y_max = array_1.shape[1]
+
+ assert array_1.shape == array_2.shape
+
+ result = np.zeros((x_max, y_max), dtype=array_1.dtype)
+
+ for x in range(x_max):
+ for y in range(y_max):
+ tmp = clip(array_1[x, y], 2, 10)
+ tmp = tmp * a + array_2[x, y] * b
+ result[x, y] = tmp + c
+
+ return result
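The nested loop computes `np.clip(array_1, 2, 10) * a + array_2 * b + c` element by element. A small pure-Python check of that formula, with nested lists standing in for 2-D arrays:

```python
def clip(a, min_value, max_value):
    return min(max(a, min_value), max_value)

def compute(array_1, array_2, a, b, c):
    # Loop formulation from the example above, applied to nested lists.
    return [[clip(v1, 2, 10) * a + v2 * b + c
             for v1, v2 in zip(row1, row2)]
            for row1, row2 in zip(array_1, array_2)]

array_1 = [[1, 5, 20], [7, 0, 11]]
array_2 = [[2, 2, 2], [3, 3, 3]]
result = compute(array_1, array_2, 4, 3, 9)
print(result)
```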
diff -Nru cython-0.26.1/docs/examples/userguide/numpy_tutorial/compute_typed.pyx cython-0.29.14/docs/examples/userguide/numpy_tutorial/compute_typed.pyx
--- cython-0.26.1/docs/examples/userguide/numpy_tutorial/compute_typed.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/numpy_tutorial/compute_typed.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,50 @@
+import numpy as np
+
+# We now need to fix a datatype for our arrays. I've used the variable
+# DTYPE for this, which is assigned to the usual NumPy runtime
+# type info object.
+DTYPE = np.intc
+
+# cdef means here that this function is a plain C function (so faster).
+# To get all the benefits, we type the arguments and the return value.
+cdef int clip(int a, int min_value, int max_value):
+ return min(max(a, min_value), max_value)
+
+
+def compute(array_1, array_2, int a, int b, int c):
+
+ # The "cdef" keyword is also used within functions to type variables. It
+ # can only be used at the top indentation level (there are non-trivial
+ # problems with allowing them in other places, though we'd love to see
+    # well thought-out proposals for it).
+ cdef Py_ssize_t x_max = array_1.shape[0]
+ cdef Py_ssize_t y_max = array_1.shape[1]
+
+ assert array_1.shape == array_2.shape
+ assert array_1.dtype == DTYPE
+ assert array_2.dtype == DTYPE
+
+ result = np.zeros((x_max, y_max), dtype=DTYPE)
+
+ # It is very important to type ALL your variables. You do not get any
+ # warnings if not, only much slower code (they are implicitly typed as
+ # Python objects).
+ # For the "tmp" variable, we want to use the same data type as is
+    # stored in the array, so we use int because it corresponds to np.intc.
+ # NB! An important side-effect of this is that if "tmp" overflows its
+ # datatype size, it will simply wrap around like in C, rather than raise
+ # an error like in Python.
+
+ cdef int tmp
+
+ # Py_ssize_t is the proper C type for Python array indices.
+ cdef Py_ssize_t x, y
+
+ for x in range(x_max):
+ for y in range(y_max):
+
+ tmp = clip(array_1[x, y], 2, 10)
+ tmp = tmp * a + array_2[x, y] * b
+ result[x, y] = tmp + c
+
+ return result
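The comment above warns that `tmp` wraps on overflow like a C int instead of raising. A pure-Python sketch of that 32-bit signed wrap-around, assuming `int` is 32 bits (which is what `np.intc` usually maps to):

```python
def wrap_int32(value):
    # Reduce to the signed 32-bit range [-2**31, 2**31 - 1], as C int
    # arithmetic does on overflow (assuming a 32-bit int).
    return ((value + 2**31) % 2**32) - 2**31

print(wrap_int32(2**31 - 1 + 1))  # -2147483648: silent wrap, no exception
```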
diff -Nru cython-0.26.1/docs/examples/userguide/numpy_tutorial/numpy_and_cython.ipynb cython-0.29.14/docs/examples/userguide/numpy_tutorial/numpy_and_cython.ipynb
--- cython-0.26.1/docs/examples/userguide/numpy_tutorial/numpy_and_cython.ipynb 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/numpy_tutorial/numpy_and_cython.ipynb 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,845 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Cython for NumPy users"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To follow the tutorial, see https://cython.readthedocs.io/en/latest/src/userguide/numpy_tutorial.html"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "WARNING: Disabling color, you really want to install colorlog.\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "0.29a0\n"
+ ]
+ }
+ ],
+ "source": [
+ "from __future__ import print_function\n",
+ "%load_ext cython\n",
+ "import Cython\n",
+ "print(Cython.__version__)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import numpy as np\n",
+ "array_1 = np.random.uniform(0, 1000, size=(3000, 2000)).astype(np.intc)\n",
+ "array_2 = np.random.uniform(0, 1000, size=(3000, 2000)).astype(np.intc)\n",
+ "a = 4\n",
+ "b = 3\n",
+ "c = 9"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### The first Cython program"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "##### Numpy version"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def compute_np(array_1, array_2, a, b, c):\n",
+ " return np.clip(array_1, 2, 10) * a + array_2 * b + c"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "103 ms ± 2.68 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\n"
+ ]
+ }
+ ],
+ "source": [
+ "timeit_result = %timeit -o compute_np(array_1, array_2, a, b, c)\n",
+ "np_time = timeit_result.average"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "np_result = compute_np(array_1, array_2, a, b, c)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "##### Pure Python version"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def clip(a, min_value, max_value):\n",
+ " return min(max(a, min_value), max_value)\n",
+ "\n",
+ "\n",
+ "def compute(array_1, array_2, a, b, c):\n",
+ " \"\"\"\n",
+ " This function must implement the formula\n",
+ " np.clip(array_1, 2, 10) * a + array_2 * b + c\n",
+ "\n",
+ " array_1 and array_2 are 2D.\n",
+ " \"\"\"\n",
+ " x_max = array_1.shape[0]\n",
+ " y_max = array_1.shape[1]\n",
+ " \n",
+ " assert array_1.shape == array_2.shape\n",
+ "\n",
+ " result = np.zeros((x_max, y_max), dtype=array_1.dtype)\n",
+ "\n",
+ " for x in range(x_max):\n",
+ " for y in range(y_max):\n",
+ "\n",
+ " tmp = clip(array_1[x, y], 2, 10)\n",
+ " tmp = tmp * a + array_2[x, y] * b\n",
+ " result[x, y] = tmp + c\n",
+ "\n",
+ " return result"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "assert np.all(compute(array_1, array_2, a, b, c) == np_result)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "1min 10s ± 844 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n"
+ ]
+ }
+ ],
+ "source": [
+ "timeit_result = %timeit -o compute(array_1, array_2, a, b, c)\n",
+ "py_time = timeit_result.average"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+    "#### We define a helper function to easily compare timings against the pure Python and NumPy versions."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def compare_time(current, reference, name):\n",
+ " ratio = reference/current\n",
+ " if ratio > 1:\n",
+ " word = \"faster\"\n",
+ " else:\n",
+ " ratio = 1 / ratio \n",
+ " word = \"slower\"\n",
+ " \n",
+ " print(\"We are\", \"{0:.1f}\".format(ratio), \"times\", word, \"than the\", name, \"version.\")\n",
+ "\n",
+ "def print_report(compute_function):\n",
+ " assert np.all(compute_function(array_1, array_2, a, b, c) == np_result)\n",
+ " timeit_result = %timeit -o compute_function(array_1, array_2, a, b, c)\n",
+ " run_time = timeit_result.average\n",
+ " compare_time(run_time, py_time, \"pure Python\")\n",
+ " compare_time(run_time, np_time, \"NumPy\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "##### Pure Python version compiled with Cython:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "%%cython -a\n",
+ "import numpy as np\n",
+ "\n",
+ "\n",
+ "def clip(a, min_value, max_value):\n",
+ " return min(max(a, min_value), max_value)\n",
+ "\n",
+ "\n",
+ "def compute(array_1, array_2, a, b, c):\n",
+ " \"\"\"\n",
+ " This function must implement the formula\n",
+ " np.clip(array_1, 2, 10) * a + array_2 * b + c\n",
+ "\n",
+ " array_1 and array_2 are 2D.\n",
+ " \"\"\"\n",
+ " x_max = array_1.shape[0]\n",
+ " y_max = array_1.shape[1]\n",
+ " \n",
+ " assert array_1.shape == array_2.shape\n",
+ "\n",
+ " result = np.zeros((x_max, y_max), dtype=array_1.dtype)\n",
+ "\n",
+ " for x in range(x_max):\n",
+ " for y in range(y_max):\n",
+ "\n",
+ " tmp = clip(array_1[x, y], 2, 10)\n",
+ " tmp = tmp * a + array_2[x, y] * b\n",
+ " result[x, y] = tmp + c\n",
+ "\n",
+ " return result"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "56.5 s ± 587 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n",
+ "We are 1.2 times faster than the pure Python version.\n",
+ "We are 546.0 times slower than the NumPy version.\n"
+ ]
+ }
+ ],
+ "source": [
+ "print_report(compute)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Adding types:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "%%cython -a\n",
+ "import numpy as np\n",
+ "\n",
+ "# We now need to fix a datatype for our arrays. I've used the variable\n",
+ "# DTYPE for this, which is assigned to the usual NumPy runtime\n",
+ "# type info object.\n",
+ "DTYPE = np.intc\n",
+ "\n",
+ "# cdef means here that this function is a plain C function (so faster).\n",
+ "# To get all the benefits, we type the arguments and the return value as int.\n",
+ "cdef int clip(int a, int min_value, int max_value):\n",
+ " return min(max(a, min_value), max_value)\n",
+ "\n",
+ "\n",
+ "def compute(array_1, array_2, int a, int b, int c):\n",
+ " \n",
+ " # The \"cdef\" keyword is also used within functions to type variables. It\n",
+ " # can only be used at the top indentation level (there are non-trivial\n",
+ " # problems with allowing them in other places, though we'd love to see\n",
+ " # good and thought out proposals for it).\n",
+ " cdef Py_ssize_t x_max = array_1.shape[0]\n",
+ " cdef Py_ssize_t y_max = array_1.shape[1]\n",
+ " \n",
+ " assert array_1.shape == array_2.shape\n",
+ " assert array_1.dtype == DTYPE\n",
+ " assert array_2.dtype == DTYPE\n",
+ "\n",
+ " result = np.zeros((x_max, y_max), dtype=DTYPE)\n",
+ " \n",
+ " # It is very important to type ALL your variables. You do not get any\n",
+ " # warnings if not, only much slower code (they are implicitly typed as\n",
+ " # Python objects).\n",
+ " # For the \"tmp\" variable, we want to use the same data type as is\n",
+    "# stored in the array, so we use int because it corresponds to np.intc.\n",
+ " # NB! An important side-effect of this is that if \"tmp\" overflows its\n",
+ " # datatype size, it will simply wrap around like in C, rather than raise\n",
+ " # an error like in Python.\n",
+ "\n",
+ " cdef int tmp\n",
+ " cdef Py_ssize_t x, y\n",
+ "\n",
+ " for x in range(x_max):\n",
+ " for y in range(y_max):\n",
+ "\n",
+ " tmp = clip(array_1[x, y], 2, 10)\n",
+ " tmp = tmp * a + array_2[x, y] * b\n",
+ " result[x, y] = tmp + c\n",
+ "\n",
+ " return result"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "26.5 s ± 422 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n",
+ "We are 2.7 times faster than the pure Python version.\n",
+ "We are 256.2 times slower than the NumPy version.\n"
+ ]
+ }
+ ],
+ "source": [
+ "print_report(compute)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Efficient indexing with memoryviews:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "%%cython -a\n",
+ "import numpy as np\n",
+ "\n",
+ "DTYPE = np.intc\n",
+ "\n",
+ "\n",
+ "cdef int clip(int a, int min_value, int max_value):\n",
+ " return min(max(a, min_value), max_value)\n",
+ "\n",
+ "\n",
+ "def compute(int[:, :] array_1, int[:, :] array_2, int a, int b, int c):\n",
+ " \n",
+ " cdef Py_ssize_t x_max = array_1.shape[0]\n",
+ " cdef Py_ssize_t y_max = array_1.shape[1]\n",
+ " \n",
+ " assert tuple(array_1.shape) == tuple(array_2.shape)\n",
+ "\n",
+ " result = np.zeros((x_max, y_max), dtype=DTYPE)\n",
+ " cdef int[:, :] result_view = result\n",
+ "\n",
+ " cdef int tmp\n",
+ " cdef Py_ssize_t x, y\n",
+ "\n",
+ " for x in range(x_max):\n",
+ " for y in range(y_max):\n",
+ "\n",
+ " tmp = clip(array_1[x, y], 2, 10)\n",
+ " tmp = tmp * a + array_2[x, y] * b\n",
+ " result_view[x, y] = tmp + c\n",
+ "\n",
+ " return result"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "22.9 ms ± 197 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)\n",
+ "We are 3081.0 times faster than the pure Python version.\n",
+ "We are 4.5 times faster than the NumPy version.\n"
+ ]
+ }
+ ],
+ "source": [
+ "print_report(compute)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Tuning indexing further:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "%%cython -a\n",
+ "import numpy as np\n",
+ "cimport cython\n",
+ "\n",
+ "DTYPE = np.intc\n",
+ "\n",
+ "\n",
+ "cdef int clip(int a, int min_value, int max_value):\n",
+ " return min(max(a, min_value), max_value)\n",
+ "\n",
+ "@cython.boundscheck(False)\n",
+ "@cython.wraparound(False)\n",
+ "def compute(int[:, :] array_1, int[:, :] array_2, int a, int b, int c):\n",
+ " \n",
+ " cdef Py_ssize_t x_max = array_1.shape[0]\n",
+ " cdef Py_ssize_t y_max = array_1.shape[1]\n",
+ " \n",
+ " assert tuple(array_1.shape) == tuple(array_2.shape)\n",
+ "\n",
+ " result = np.zeros((x_max, y_max), dtype=DTYPE)\n",
+ " cdef int[:, :] result_view = result\n",
+ "\n",
+ " cdef int tmp\n",
+ " cdef Py_ssize_t x, y\n",
+ "\n",
+ " for x in range(x_max):\n",
+ " for y in range(y_max):\n",
+ "\n",
+ " tmp = clip(array_1[x, y], 2, 10)\n",
+ " tmp = tmp * a + array_2[x, y] * b\n",
+ " result_view[x, y] = tmp + c\n",
+ "\n",
+ " return result"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "16.8 ms ± 25.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n",
+ "We are 4200.7 times faster than the pure Python version.\n",
+ "We are 6.2 times faster than the NumPy version.\n"
+ ]
+ }
+ ],
+ "source": [
+ "print_report(compute)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Declaring the NumPy arrays as contiguous."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%%cython\n",
+ "import numpy as np\n",
+ "cimport cython\n",
+ "\n",
+ "DTYPE = np.intc\n",
+ "\n",
+ "\n",
+ "cdef int clip(int a, int min_value, int max_value):\n",
+ " return min(max(a, min_value), max_value)\n",
+ "\n",
+ "\n",
+ "@cython.boundscheck(False)\n",
+ "@cython.wraparound(False)\n",
+ "def compute(int[:, ::1] array_1, int[:, ::1] array_2, int a, int b, int c):\n",
+ " \n",
+ " cdef Py_ssize_t x_max = array_1.shape[0]\n",
+ " cdef Py_ssize_t y_max = array_1.shape[1]\n",
+ " \n",
+ " assert tuple(array_1.shape) == tuple(array_2.shape)\n",
+ "\n",
+ " result = np.zeros((x_max, y_max), dtype=DTYPE)\n",
+ " cdef int[:, ::1] result_view = result\n",
+ "\n",
+ " cdef int tmp\n",
+ " cdef Py_ssize_t x, y\n",
+ "\n",
+ " for x in range(x_max):\n",
+ " for y in range(y_max):\n",
+ "\n",
+ " tmp = clip(array_1[x, y], 2, 10)\n",
+ " tmp = tmp * a + array_2[x, y] * b\n",
+ " result_view[x, y] = tmp + c\n",
+ "\n",
+ " return result"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "11.1 ms ± 30.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n",
+ "We are 6350.9 times faster than the pure Python version.\n",
+ "We are 9.3 times faster than the NumPy version.\n"
+ ]
+ }
+ ],
+ "source": [
+ "print_report(compute)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Making the function cleaner"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%%cython -a\n",
+ "# cython: infer_types=True\n",
+ "import numpy as np\n",
+ "cimport cython\n",
+ "\n",
+ "DTYPE = np.intc\n",
+ "\n",
+ "\n",
+ "cdef int clip(int a, int min_value, int max_value):\n",
+ " return min(max(a, min_value), max_value)\n",
+ "\n",
+ "\n",
+ "@cython.boundscheck(False)\n",
+ "@cython.wraparound(False)\n",
+ "def compute(int[:, ::1] array_1, int[:, ::1] array_2, int a, int b, int c):\n",
+ " \n",
+ " x_max = array_1.shape[0]\n",
+ " y_max = array_1.shape[1]\n",
+ " \n",
+ " assert tuple(array_1.shape) == tuple(array_2.shape)\n",
+ "\n",
+ " result = np.zeros((x_max, y_max), dtype=DTYPE)\n",
+ " cdef int[:, ::1] result_view = result\n",
+ "\n",
+ " cdef int tmp\n",
+ " cdef Py_ssize_t x, y\n",
+ "\n",
+ " for x in range(x_max):\n",
+ " for y in range(y_max):\n",
+ "\n",
+ " tmp = clip(array_1[x, y], 2, 10)\n",
+ " tmp = tmp * a + array_2[x, y] * b\n",
+ " result_view[x, y] = tmp + c\n",
+ "\n",
+ " return result"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 38,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "11.5 ms ± 261 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n",
+ "We are 6131.2 times faster than the pure Python version.\n",
+ "We are 9.0 times faster than the NumPy version.\n"
+ ]
+ }
+ ],
+ "source": [
+ "print_report(compute)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### More generic code:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 43,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "%%cython\n",
+ "# cython: infer_types=True\n",
+ "import numpy as np\n",
+ "cimport cython\n",
+ "\n",
+ "ctypedef fused my_type:\n",
+ " int\n",
+ " double\n",
+ " long long\n",
+ "\n",
+ "\n",
+ "cdef my_type clip(my_type a, my_type min_value, my_type max_value):\n",
+ " return min(max(a, min_value), max_value)\n",
+ "\n",
+ "\n",
+ "@cython.boundscheck(False)\n",
+ "@cython.wraparound(False)\n",
+ "def compute(my_type[:, ::1] array_1, my_type[:, ::1] array_2, my_type a, my_type b, my_type c):\n",
+ " \n",
+ " x_max = array_1.shape[0]\n",
+ " y_max = array_1.shape[1]\n",
+ " \n",
+ " assert tuple(array_1.shape) == tuple(array_2.shape)\n",
+ " \n",
+ " if my_type is int:\n",
+ " dtype = np.intc\n",
+ " elif my_type is double:\n",
+ " dtype = np.double\n",
+ " elif my_type is cython.longlong:\n",
+    "        dtype = np.longlong\n",
+ " \n",
+ " result = np.zeros((x_max, y_max), dtype=dtype)\n",
+ " cdef my_type[:, ::1] result_view = result\n",
+ "\n",
+ " cdef my_type tmp\n",
+ " cdef Py_ssize_t x, y\n",
+ "\n",
+ " for x in range(x_max):\n",
+ " for y in range(y_max):\n",
+ "\n",
+ " tmp = clip(array_1[x, y], 2, 10)\n",
+ " tmp = tmp * a + array_2[x, y] * b\n",
+ " result_view[x, y] = tmp + c\n",
+ "\n",
+ " return result"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 45,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "arr_1_float = array_1.astype(np.float64)\n",
+ "arr_2_float = array_2.astype(np.float64)\n",
+ "\n",
+ "float_cython_result = compute(arr_1_float, arr_2_float, a, b, c)\n",
+ "float_numpy_result = compute_np(arr_1_float, arr_2_float, a, b, c)\n",
+ "\n",
+ "assert np.all(float_cython_result == float_numpy_result)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 46,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "11.5 ms ± 258 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n",
+ "We are 6153.1 times faster than the pure Python version.\n",
+ "We are 9.0 times faster than the NumPy version.\n"
+ ]
+ }
+ ],
+ "source": [
+ "print_report(compute)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Using multiple threads"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 56,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "%%cython --force\n",
+ "# distutils: extra_compile_args=-fopenmp\n",
+ "# distutils: extra_link_args=-fopenmp\n",
+ "import numpy as np\n",
+ "cimport cython\n",
+ "from cython.parallel import prange\n",
+ "\n",
+ "ctypedef fused my_type:\n",
+ " int\n",
+ " double\n",
+ " long long\n",
+ "\n",
+ "\n",
+    "# We declare our plain C function as nogil\n",
+ "cdef my_type clip(my_type a, my_type min_value, my_type max_value) nogil:\n",
+ " return min(max(a, min_value), max_value)\n",
+ "\n",
+ "\n",
+ "@cython.boundscheck(False)\n",
+ "@cython.wraparound(False)\n",
+ "def compute(my_type[:, ::1] array_1, my_type[:, ::1] array_2, my_type a, my_type b, my_type c):\n",
+ " \n",
+ " cdef Py_ssize_t x_max = array_1.shape[0]\n",
+ " cdef Py_ssize_t y_max = array_1.shape[1]\n",
+ " \n",
+ " assert tuple(array_1.shape) == tuple(array_2.shape)\n",
+ " \n",
+ " if my_type is int:\n",
+ " dtype = np.intc\n",
+ " elif my_type is double:\n",
+ " dtype = np.double\n",
+ " elif my_type is cython.longlong:\n",
+ " dtype = np.longlong\n",
+ " \n",
+ " result = np.zeros((x_max, y_max), dtype=dtype)\n",
+ " cdef my_type[:, ::1] result_view = result\n",
+ "\n",
+ " cdef my_type tmp\n",
+ " cdef Py_ssize_t x, y\n",
+ "\n",
+ " # We use prange here.\n",
+ " for x in prange(x_max, nogil=True):\n",
+ " for y in range(y_max):\n",
+ "\n",
+ " tmp = clip(array_1[x, y], 2, 10)\n",
+ " tmp = tmp * a + array_2[x, y] * b\n",
+ " result_view[x, y] = tmp + c\n",
+ "\n",
+ " return result"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 57,
+ "metadata": {
+ "scrolled": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "9.33 ms ± 412 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n",
+ "We are 7559.0 times faster than the pure Python version.\n",
+ "We are 11.1 times faster than the NumPy version.\n"
+ ]
+ }
+ ],
+ "source": [
+ "print_report(compute)"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.6.6"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
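The notebook above benchmarks several versions of the same formula. As a quick sanity check outside the notebook, the formula can be evaluated with a minimal pure-Python sketch (small hand-written nested lists standing in for the NumPy arrays):

```python
def clip(a, min_value, max_value):
    # Scalar clip, same as in the notebook: bound a to [min_value, max_value].
    return min(max(a, min_value), max_value)

def compute(array_1, array_2, a, b, c):
    # Pure-Python version of np.clip(array_1, 2, 10) * a + array_2 * b + c
    # for 2D inputs given as nested lists.
    return [[clip(v1, 2, 10) * a + v2 * b + c
             for v1, v2 in zip(row1, row2)]
            for row1, row2 in zip(array_1, array_2)]

print(compute([[1, 5, 20]], [[2, 3, 4]], 4, 3, 9))  # [[23, 38, 61]]
```

This mirrors the notebook's pure-Python cell, element by element, and makes it easy to verify the clipped low/high cases (1 → 2, 20 → 10) by hand.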
diff -Nru cython-0.26.1/docs/examples/userguide/parallelism/breaking_loop.pyx cython-0.29.14/docs/examples/userguide/parallelism/breaking_loop.pyx
--- cython-0.26.1/docs/examples/userguide/parallelism/breaking_loop.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/parallelism/breaking_loop.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,13 @@
+from cython.parallel import prange
+
+cdef int func(Py_ssize_t n):
+ cdef Py_ssize_t i
+
+ for i in prange(n, nogil=True):
+ if i == 8:
+ with gil:
+ raise Exception()
+ elif i == 4:
+ break
+ elif i == 2:
+ return i
diff -Nru cython-0.26.1/docs/examples/userguide/parallelism/cimport_openmp.pyx cython-0.29.14/docs/examples/userguide/parallelism/cimport_openmp.pyx
--- cython-0.26.1/docs/examples/userguide/parallelism/cimport_openmp.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/parallelism/cimport_openmp.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,13 @@
+# tag: openmp
+# You can ignore the previous line.
+# It's for internal testing of the Cython documentation.
+
+from cython.parallel cimport parallel
+cimport openmp
+
+cdef int num_threads
+
+openmp.omp_set_dynamic(1)
+with nogil, parallel():
+ num_threads = openmp.omp_get_num_threads()
+ # ...
diff -Nru cython-0.26.1/docs/examples/userguide/parallelism/setup.py cython-0.29.14/docs/examples/userguide/parallelism/setup.py
--- cython-0.26.1/docs/examples/userguide/parallelism/setup.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/parallelism/setup.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,17 @@
+from distutils.core import setup
+from distutils.extension import Extension
+from Cython.Build import cythonize
+
+ext_modules = [
+ Extension(
+ "hello",
+ ["hello.pyx"],
+ extra_compile_args=['-fopenmp'],
+ extra_link_args=['-fopenmp'],
+ )
+]
+
+setup(
+ name='hello-parallel-world',
+ ext_modules=cythonize(ext_modules),
+)
diff -Nru cython-0.26.1/docs/examples/userguide/parallelism/simple_sum.pyx cython-0.29.14/docs/examples/userguide/parallelism/simple_sum.pyx
--- cython-0.26.1/docs/examples/userguide/parallelism/simple_sum.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/parallelism/simple_sum.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,10 @@
+from cython.parallel import prange
+
+cdef int i
+cdef int n = 30
+cdef int sum = 0
+
+for i in prange(n, nogil=True):
+ sum += i
+
+print(sum)
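The prange loop above performs a parallel sum reduction over i = 0..n-1, so the printed result matches the serial arithmetic-series value; a plain-Python check of the expected output:

```python
n = 30
# prange(n) iterates i = 0..n-1, so the reduction equals the serial sum.
total = sum(range(n))
print(total)  # 435, i.e. n * (n - 1) // 2
```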
diff -Nru cython-0.26.1/docs/examples/userguide/sharing_declarations/c_lunch.pxd cython-0.29.14/docs/examples/userguide/sharing_declarations/c_lunch.pxd
--- cython-0.26.1/docs/examples/userguide/sharing_declarations/c_lunch.pxd 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/sharing_declarations/c_lunch.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,2 @@
+cdef extern from "lunch.h":
+ void eject_tomato(float)
diff -Nru cython-0.26.1/docs/examples/userguide/sharing_declarations/dishes.pxd cython-0.29.14/docs/examples/userguide/sharing_declarations/dishes.pxd
--- cython-0.26.1/docs/examples/userguide/sharing_declarations/dishes.pxd 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/sharing_declarations/dishes.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,6 @@
+cdef enum otherstuff:
+ sausage, eggs, lettuce
+
+cdef struct spamdish:
+ int oz_of_spam
+ otherstuff filler
diff -Nru cython-0.26.1/docs/examples/userguide/sharing_declarations/landscaping.pyx cython-0.29.14/docs/examples/userguide/sharing_declarations/landscaping.pyx
--- cython-0.26.1/docs/examples/userguide/sharing_declarations/landscaping.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/sharing_declarations/landscaping.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,7 @@
+cimport shrubbing
+import shrubbing
+
+def main():
+ cdef shrubbing.Shrubbery sh
+ sh = shrubbing.standard_shrubbery()
+ print("Shrubbery size is", sh.width, 'x', sh.length)
diff -Nru cython-0.26.1/docs/examples/userguide/sharing_declarations/lunch.h cython-0.29.14/docs/examples/userguide/sharing_declarations/lunch.h
--- cython-0.26.1/docs/examples/userguide/sharing_declarations/lunch.h 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/sharing_declarations/lunch.h 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1 @@
+void eject_tomato(float speed);
diff -Nru cython-0.26.1/docs/examples/userguide/sharing_declarations/lunch.pyx cython-0.29.14/docs/examples/userguide/sharing_declarations/lunch.pyx
--- cython-0.26.1/docs/examples/userguide/sharing_declarations/lunch.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/sharing_declarations/lunch.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,4 @@
+cimport c_lunch
+
+def eject_tomato(float speed):
+ c_lunch.eject_tomato(speed)
diff -Nru cython-0.26.1/docs/examples/userguide/sharing_declarations/restaurant.pyx cython-0.29.14/docs/examples/userguide/sharing_declarations/restaurant.pyx
--- cython-0.26.1/docs/examples/userguide/sharing_declarations/restaurant.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/sharing_declarations/restaurant.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,12 @@
+from __future__ import print_function
+cimport dishes
+from dishes cimport spamdish
+
+cdef void prepare(spamdish *d):
+ d.oz_of_spam = 42
+ d.filler = dishes.sausage
+
+def serve():
+ cdef spamdish d
+ prepare(&d)
+ print(f'{d.oz_of_spam} oz spam, filler no. {d.filler}')
diff -Nru cython-0.26.1/docs/examples/userguide/sharing_declarations/setup.py cython-0.29.14/docs/examples/userguide/sharing_declarations/setup.py
--- cython-0.26.1/docs/examples/userguide/sharing_declarations/setup.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/sharing_declarations/setup.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,4 @@
+from distutils.core import setup
+from Cython.Build import cythonize
+
+setup(ext_modules=cythonize(["landscaping.pyx", "shrubbing.pyx"]))
diff -Nru cython-0.26.1/docs/examples/userguide/sharing_declarations/shrubbing.pxd cython-0.29.14/docs/examples/userguide/sharing_declarations/shrubbing.pxd
--- cython-0.26.1/docs/examples/userguide/sharing_declarations/shrubbing.pxd 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/sharing_declarations/shrubbing.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,3 @@
+cdef class Shrubbery:
+ cdef int width
+ cdef int length
diff -Nru cython-0.26.1/docs/examples/userguide/sharing_declarations/shrubbing.pyx cython-0.29.14/docs/examples/userguide/sharing_declarations/shrubbing.pyx
--- cython-0.26.1/docs/examples/userguide/sharing_declarations/shrubbing.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/sharing_declarations/shrubbing.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,7 @@
+cdef class Shrubbery:
+ def __cinit__(self, int w, int l):
+ self.width = w
+ self.length = l
+
+def standard_shrubbery():
+ return Shrubbery(3, 7)
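The `.pxd`/`.pyx` split above shares the `Shrubbery` declaration between modules; stripped of the `cdef` typing, the runtime behaviour is just:

```python
class Shrubbery:
    # Plain-Python stand-in for the cdef class declared in shrubbing.pxd
    # and implemented in shrubbing.pyx.
    def __init__(self, w, l):
        self.width = w
        self.length = l

def standard_shrubbery():
    return Shrubbery(3, 7)

sh = standard_shrubbery()
print("Shrubbery size is", sh.width, "x", sh.length)  # Shrubbery size is 3 x 7
```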
diff -Nru cython-0.26.1/docs/examples/userguide/sharing_declarations/spammery.pyx cython-0.29.14/docs/examples/userguide/sharing_declarations/spammery.pyx
--- cython-0.26.1/docs/examples/userguide/sharing_declarations/spammery.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/sharing_declarations/spammery.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,11 @@
+from __future__ import print_function
+
+from volume cimport cube
+
+def menu(description, size):
+ print(description, ":", cube(size),
+ "cubic metres of spam")
+
+menu("Entree", 1)
+menu("Main course", 3)
+menu("Dessert", 2)
diff -Nru cython-0.26.1/docs/examples/userguide/sharing_declarations/volume.pxd cython-0.29.14/docs/examples/userguide/sharing_declarations/volume.pxd
--- cython-0.26.1/docs/examples/userguide/sharing_declarations/volume.pxd 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/sharing_declarations/volume.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1 @@
+cdef float cube(float)
diff -Nru cython-0.26.1/docs/examples/userguide/sharing_declarations/volume.pyx cython-0.29.14/docs/examples/userguide/sharing_declarations/volume.pyx
--- cython-0.26.1/docs/examples/userguide/sharing_declarations/volume.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/sharing_declarations/volume.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,2 @@
+cdef float cube(float x):
+ return x * x * x
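`volume.pxd` exposes `cube` so that `spammery.pyx` can `cimport` it; the function itself is plain arithmetic, as this Python equivalent shows:

```python
def cube(x):
    # Same computation as the cdef float cube() in volume.pyx.
    return x * x * x

print(cube(3), "cubic metres of spam")  # what menu("Main course", 3) reports
```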
diff -Nru cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/cython_usage.pyx cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/cython_usage.pyx
--- cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/cython_usage.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/cython_usage.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,12 @@
+# distutils: language = c++
+
+from Rectangle cimport Rectangle
+
+def main():
+ rec_ptr = new Rectangle(1, 2, 3, 4) # Instantiate a Rectangle object on the heap
+ try:
+ rec_area = rec_ptr.getArea()
+ finally:
+ del rec_ptr # delete heap allocated object
+
+ cdef Rectangle rec_stack # Instantiate a Rectangle object on the stack
diff -Nru cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/function_templates.pyx cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/function_templates.pyx
--- cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/function_templates.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/function_templates.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,7 @@
+# distutils: language = c++
+
+cdef extern from "" namespace "std":
+ T max[T](T a, T b)
+
+print(max[long](3, 4))
+print(max(1.5, 2.5)) # simple template argument deduction
diff -Nru cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/iterate.pyx cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/iterate.pyx
--- cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/iterate.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/iterate.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,12 @@
+# distutils: language = c++
+
+from libcpp.vector cimport vector
+
+def main():
+ cdef vector[int] v = [4, 6, 5, 10, 3]
+
+ cdef int value
+ for value in v:
+ print(value)
+
+ return [x*x for x in v if x % 2 == 0]
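Since the C++ vector supports Cython's iteration protocol, the loop and comprehension in iterate.pyx behave exactly like their plain-list counterparts:

```python
v = [4, 6, 5, 10, 3]  # plain list standing in for vector[int]

for value in v:
    print(value)

# Same comprehension as the return statement in iterate.pyx:
squares_of_even = [x * x for x in v if x % 2 == 0]
print(squares_of_even)  # [16, 36, 100]
```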
diff -Nru cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/nested_class.pyx cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/nested_class.pyx
--- cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/nested_class.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/nested_class.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,17 @@
+# distutils: language = c++
+
+cdef extern from "" namespace "std":
+ cdef cppclass vector[T]:
+ cppclass iterator:
+ T operator*()
+ iterator operator++()
+ bint operator==(iterator)
+ bint operator!=(iterator)
+ vector()
+ void push_back(T&)
+ T& operator[](int)
+ T& at(int)
+ iterator begin()
+ iterator end()
+
+cdef vector[int].iterator iter #iter is declared as being of type vector::iterator
diff -Nru cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/python_to_cpp.pyx cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/python_to_cpp.pyx
--- cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/python_to_cpp.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/python_to_cpp.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,19 @@
+# distutils: language = c++
+
+from libcpp.string cimport string
+from libcpp.vector cimport vector
+
+py_bytes_object = b'The knights who say ni'
+py_unicode_object = u'Those who hear them seldom live to tell the tale.'
+
+cdef string s = py_bytes_object
+print(s) # b'The knights who say ni'
+
+cdef string cpp_string = py_unicode_object.encode('utf-8')
+print(cpp_string) # b'Those who hear them seldom live to tell the tale.'
+
+cdef vector[int] vect = range(1, 10, 2)
+print(vect) # [1, 3, 5, 7, 9]
+
+cdef vector[string] cpp_strings = b'It is a good shrubbery'.split()
+print(cpp_strings[1]) # b'is'
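The coercions in python_to_cpp.pyx follow ordinary Python encode/split semantics; plain Python shows the exact byte values the C++ containers receive:

```python
py_unicode_object = u'Those who hear them seldom live to tell the tale.'
# The bytes stored when Cython coerces the encoded text to std::string:
cpp_bytes = py_unicode_object.encode('utf-8')

# split() yields a list of bytes objects, which coerces to vector[string]:
words = b'It is a good shrubbery'.split()
print(words[1])  # b'is'
```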
diff -Nru cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/Rectangle.cpp cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/Rectangle.cpp
--- cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/Rectangle.cpp 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/Rectangle.cpp 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,40 @@
+#include <iostream>
+#include "Rectangle.h"
+
+namespace shapes {
+
+ // Default constructor
+ Rectangle::Rectangle () {}
+
+ // Overloaded constructor
+ Rectangle::Rectangle (int x0, int y0, int x1, int y1) {
+ this->x0 = x0;
+ this->y0 = y0;
+ this->x1 = x1;
+ this->y1 = y1;
+ }
+
+ // Destructor
+ Rectangle::~Rectangle () {}
+
+ // Return the area of the rectangle
+ int Rectangle::getArea () {
+ return (this->x1 - this->x0) * (this->y1 - this->y0);
+ }
+
+ // Get the size of the rectangle.
+ // Put the size in the pointer args
+ void Rectangle::getSize (int *width, int *height) {
+ (*width) = x1 - x0;
+ (*height) = y1 - y0;
+ }
+
+ // Move the rectangle by dx dy
+ void Rectangle::move (int dx, int dy) {
+ this->x0 += dx;
+ this->y0 += dy;
+ this->x1 += dx;
+ this->y1 += dy;
+ }
+}
+
diff -Nru cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/Rectangle.h cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/Rectangle.h
--- cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/Rectangle.h 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/Rectangle.h 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,17 @@
+#ifndef RECTANGLE_H
+#define RECTANGLE_H
+
+namespace shapes {
+ class Rectangle {
+ public:
+ int x0, y0, x1, y1;
+ Rectangle();
+ Rectangle(int x0, int y0, int x1, int y1);
+ ~Rectangle();
+ int getArea();
+ void getSize(int* width, int* height);
+ void move(int dx, int dy);
+ };
+}
+
+#endif
diff -Nru cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/Rectangle.pxd cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/Rectangle.pxd
--- cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/Rectangle.pxd 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/Rectangle.pxd 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,12 @@
+cdef extern from "Rectangle.cpp":
+ pass
+
+# Declare the class with cdef
+cdef extern from "Rectangle.h" namespace "shapes":
+ cdef cppclass Rectangle:
+ Rectangle() except +
+ Rectangle(int, int, int, int) except +
+ int x0, y0, x1, y1
+ int getArea()
+ void getSize(int* width, int* height)
+ void move(int, int)
diff -Nru cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/rect_ptr.pyx cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/rect_ptr.pyx
--- cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/rect_ptr.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/rect_ptr.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,12 @@
+# distutils: language = c++
+
+from Rectangle cimport Rectangle
+
+cdef class PyRectangle:
+ cdef Rectangle*c_rect # hold a pointer to the C++ instance which we're wrapping
+
+ def __cinit__(self, int x0, int y0, int x1, int y1):
+ self.c_rect = new Rectangle(x0, y0, x1, y1)
+
+ def __dealloc__(self):
+ del self.c_rect
diff -Nru cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/rect.pyx cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/rect.pyx
--- cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/rect.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/rect.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,23 @@
+# distutils: language = c++
+
+from Rectangle cimport Rectangle
+
+# Create a Cython extension type which holds a C++ instance
+# as an attribute and create a bunch of forwarding methods
+# on the Python extension type.
+cdef class PyRectangle:
+ cdef Rectangle c_rect # Hold a C++ instance which we're wrapping
+
+ def __cinit__(self, int x0, int y0, int x1, int y1):
+ self.c_rect = Rectangle(x0, y0, x1, y1)
+
+ def get_area(self):
+ return self.c_rect.getArea()
+
+ def get_size(self):
+ cdef int width, height
+ self.c_rect.getSize(&width, &height)
+ return width, height
+
+ def move(self, dx, dy):
+ self.c_rect.move(dx, dy)
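
Once built, ``PyRectangle`` behaves like any Python class. As a compile-free illustration (a hypothetical pure-Python stand-in, not the extension type itself), the forwarding interface amounts to:

```python
# Pure-Python stand-in mirroring the PyRectangle interface above.
# Illustrative mock only; the real type holds a C++ Rectangle instance.
class PyRectangleMock:
    def __init__(self, x0, y0, x1, y1):
        self.x0, self.y0, self.x1, self.y1 = x0, y0, x1, y1

    def get_area(self):
        # Mirrors Rectangle::getArea()
        return (self.x1 - self.x0) * (self.y1 - self.y0)

    def get_size(self):
        # Mirrors Rectangle::getSize(), returning a tuple instead of
        # writing through pointers.
        return self.x1 - self.x0, self.y1 - self.y0

    def move(self, dx, dy):
        self.x0 += dx
        self.y0 += dy
        self.x1 += dx
        self.y1 += dy
```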
diff -Nru cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/rect_with_attributes.pyx cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/rect_with_attributes.pyx
--- cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/rect_with_attributes.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/rect_with_attributes.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,52 @@
+# distutils: language = c++
+
+from Rectangle cimport Rectangle
+
+cdef class PyRectangle:
+ cdef Rectangle c_rect
+
+ def __cinit__(self, int x0, int y0, int x1, int y1):
+ self.c_rect = Rectangle(x0, y0, x1, y1)
+
+ def get_area(self):
+ return self.c_rect.getArea()
+
+ def get_size(self):
+ cdef int width, height
+ self.c_rect.getSize(&width, &height)
+ return width, height
+
+ def move(self, dx, dy):
+ self.c_rect.move(dx, dy)
+
+ # Attribute access
+ @property
+ def x0(self):
+ return self.c_rect.x0
+ @x0.setter
+ def x0(self, x0):
+ self.c_rect.x0 = x0
+
+ # Attribute access
+ @property
+ def x1(self):
+ return self.c_rect.x1
+ @x1.setter
+ def x1(self, x1):
+ self.c_rect.x1 = x1
+
+ # Attribute access
+ @property
+ def y0(self):
+ return self.c_rect.y0
+ @y0.setter
+ def y0(self, y0):
+ self.c_rect.y0 = y0
+
+ # Attribute access
+ @property
+ def y1(self):
+ return self.c_rect.y1
+ @y1.setter
+ def y1(self, y1):
+ self.c_rect.y1 = y1
diff -Nru cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/setup.py cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/setup.py
--- cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/setup.py 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/setup.py 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,5 @@
+from distutils.core import setup
+
+from Cython.Build import cythonize
+
+setup(ext_modules=cythonize("rect.pyx"))
diff -Nru cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/templates.pyx cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/templates.pyx
--- cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/templates.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/templates.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,30 @@
+# distutils: language = c++
+
+# import dereference and increment operators
+from cython.operator cimport dereference as deref, preincrement as inc
+
+cdef extern from "<vector>" namespace "std":
+ cdef cppclass vector[T]:
+ cppclass iterator:
+ T operator*()
+ iterator operator++()
+ bint operator==(iterator)
+ bint operator!=(iterator)
+ vector()
+ void push_back(T&)
+ T& operator[](int)
+ T& at(int)
+ iterator begin()
+ iterator end()
+
+cdef vector[int] *v = new vector[int]()
+cdef int i
+for i in range(10):
+ v.push_back(i)
+
+cdef vector[int].iterator it = v.begin()
+while it != v.end():
+ print(deref(it))
+ inc(it)
+
+del v
diff -Nru cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/vector_demo.pyx cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/vector_demo.pyx
--- cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/vector_demo.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/vector_demo.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,15 @@
+# distutils: language = c++
+
+from libcpp.vector cimport vector
+
+cdef vector[int] vect
+cdef int i, x
+
+for i in range(10):
+ vect.push_back(i)
+
+for i in range(10):
+ print(vect[i])
+
+for x in vect:
+ print(x)
diff -Nru cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/wrapper_vector.pyx cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/wrapper_vector.pyx
--- cython-0.26.1/docs/examples/userguide/wrapping_CPlusPlus/wrapper_vector.pyx 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/examples/userguide/wrapping_CPlusPlus/wrapper_vector.pyx 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,17 @@
+# distutils: language = c++
+
+from libcpp.vector cimport vector
+
+
+cdef class VectorStack:
+ cdef vector[int] v
+
+ def push(self, x):
+ self.v.push_back(x)
+
+ def pop(self):
+ if self.v.empty():
+ raise IndexError()
+ x = self.v.back()
+ self.v.pop_back()
+ return x
diff -Nru cython-0.26.1/docs/index.rst cython-0.29.14/docs/index.rst
--- cython-0.26.1/docs/index.rst 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/docs/index.rst 2018-11-24 09:20:06.000000000 +0000
@@ -10,4 +10,4 @@
src/quickstart/index
src/tutorial/index
src/userguide/index
- src/reference/index
+ src/changes
diff -Nru cython-0.26.1/docs/make.bat cython-0.29.14/docs/make.bat
--- cython-0.26.1/docs/make.bat 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/make.bat 2018-11-24 09:20:06.000000000 +0000
@@ -0,0 +1,242 @@
+@ECHO OFF
+
+REM Command file for Sphinx documentation
+
+if "%SPHINXBUILD%" == "" (
+ set SPHINXBUILD=sphinx-build
+)
+set BUILDDIR=build
+set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
+set I18NSPHINXOPTS=%SPHINXOPTS% .
+if NOT "%PAPER%" == "" (
+ set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
+ set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
+)
+
+if "%1" == "" goto help
+
+if "%1" == "help" (
+ :help
+	echo.Please use `make ^<target^>` where ^<target^> is one of
+ echo. html to make standalone HTML files
+ echo. dirhtml to make HTML files named index.html in directories
+ echo. singlehtml to make a single large HTML file
+ echo. pickle to make pickle files
+ echo. json to make JSON files
+ echo. htmlhelp to make HTML files and a HTML help project
+ echo. qthelp to make HTML files and a qthelp project
+ echo. devhelp to make HTML files and a Devhelp project
+ echo. epub to make an epub
+ echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
+ echo. text to make text files
+ echo. man to make manual pages
+ echo. texinfo to make Texinfo files
+ echo. gettext to make PO message catalogs
+ echo. changes to make an overview over all changed/added/deprecated items
+ echo. xml to make Docutils-native XML files
+ echo. pseudoxml to make pseudoxml-XML files for display purposes
+ echo. linkcheck to check all external links for integrity
+ echo. doctest to run all doctests embedded in the documentation if enabled
+ goto end
+)
+
+if "%1" == "clean" (
+ for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
+ del /q /s %BUILDDIR%\*
+ goto end
+)
+
+
+%SPHINXBUILD% 2> nul
+if errorlevel 9009 (
+ echo.
+ echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
+ echo.installed, then set the SPHINXBUILD environment variable to point
+ echo.to the full path of the 'sphinx-build' executable. Alternatively you
+ echo.may add the Sphinx directory to PATH.
+ echo.
+ echo.If you don't have Sphinx installed, grab it from
+ echo.http://sphinx-doc.org/
+ exit /b 1
+)
+
+if "%1" == "html" (
+ %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished. The HTML pages are in %BUILDDIR%/html.
+ goto end
+)
+
+if "%1" == "dirhtml" (
+ %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
+ goto end
+)
+
+if "%1" == "singlehtml" (
+ %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
+ goto end
+)
+
+if "%1" == "pickle" (
+ %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished; now you can process the pickle files.
+ goto end
+)
+
+if "%1" == "json" (
+ %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished; now you can process the JSON files.
+ goto end
+)
+
+if "%1" == "htmlhelp" (
+ %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished; now you can run HTML Help Workshop with the ^
+.hhp project file in %BUILDDIR%/htmlhelp.
+ goto end
+)
+
+if "%1" == "qthelp" (
+ %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished; now you can run "qcollectiongenerator" with the ^
+.qhcp project file in %BUILDDIR%/qthelp, like this:
+ echo.^> qcollectiongenerator %BUILDDIR%\qthelp\Sphinx-Gallery.qhcp
+ echo.To view the help file:
+ echo.^> assistant -collectionFile %BUILDDIR%\qthelp\Sphinx-Gallery.ghc
+ goto end
+)
+
+if "%1" == "devhelp" (
+ %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished.
+ goto end
+)
+
+if "%1" == "epub" (
+ %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished. The epub file is in %BUILDDIR%/epub.
+ goto end
+)
+
+if "%1" == "latex" (
+ %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
+ goto end
+)
+
+if "%1" == "latexpdf" (
+ %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
+ cd %BUILDDIR%/latex
+ make all-pdf
+ cd %BUILDDIR%/..
+ echo.
+ echo.Build finished; the PDF files are in %BUILDDIR%/latex.
+ goto end
+)
+
+if "%1" == "latexpdfja" (
+ %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
+ cd %BUILDDIR%/latex
+ make all-pdf-ja
+ cd %BUILDDIR%/..
+ echo.
+ echo.Build finished; the PDF files are in %BUILDDIR%/latex.
+ goto end
+)
+
+if "%1" == "text" (
+ %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished. The text files are in %BUILDDIR%/text.
+ goto end
+)
+
+if "%1" == "man" (
+ %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished. The manual pages are in %BUILDDIR%/man.
+ goto end
+)
+
+if "%1" == "texinfo" (
+ %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
+ goto end
+)
+
+if "%1" == "gettext" (
+ %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
+ goto end
+)
+
+if "%1" == "changes" (
+ %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.The overview file is in %BUILDDIR%/changes.
+ goto end
+)
+
+if "%1" == "linkcheck" (
+ %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Link check complete; look for any errors in the above output ^
+or in %BUILDDIR%/linkcheck/output.txt.
+ goto end
+)
+
+if "%1" == "doctest" (
+ %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Testing of doctests in the sources finished, look at the ^
+results in %BUILDDIR%/doctest/output.txt.
+ goto end
+)
+
+if "%1" == "xml" (
+ %SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished. The XML files are in %BUILDDIR%/xml.
+ goto end
+)
+
+if "%1" == "pseudoxml" (
+ %SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
+ if errorlevel 1 exit /b 1
+ echo.
+ echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
+ goto end
+)
+
+:end
diff -Nru cython-0.26.1/docs/README cython-0.29.14/docs/README
--- cython-0.26.1/docs/README 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/docs/README 2018-11-24 09:20:06.000000000 +0000
@@ -1,8 +1,16 @@
-Cython's entire documentation suite is currently being overhauled.
+Welcome to Cython's documentation.
-For the time being, I'll use this page to post notes.
+To build the documentation on Linux, you need Make and Sphinx installed on your system. Then execute::
-The previous Cython documentation files are hosted at
+ make html
+
+On Windows systems, you only need Sphinx. Open PowerShell and type::
+
+ ./make.bat html
+
+You can then view the documentation by opening ``cython/docs/build/html/index.html`` in a browser.
+
+The current Cython documentation files are hosted at
https://cython.readthedocs.io/en/latest/
diff -Nru cython-0.26.1/docs/src/changes.rst cython-0.29.14/docs/src/changes.rst
--- cython-0.26.1/docs/src/changes.rst 1970-01-01 00:00:00.000000000 +0000
+++ cython-0.29.14/docs/src/changes.rst 2018-09-22 14:18:56.000000000 +0000
@@ -0,0 +1 @@
+.. include:: ../../CHANGES.rst
diff -Nru cython-0.26.1/docs/src/quickstart/build.rst cython-0.29.14/docs/src/quickstart/build.rst
--- cython-0.26.1/docs/src/quickstart/build.rst 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/docs/src/quickstart/build.rst 2018-11-24 09:20:06.000000000 +0000
@@ -4,48 +4,58 @@
Cython code must, unlike Python, be compiled. This happens in two stages:
- A ``.pyx`` file is compiled by Cython to a ``.c`` file, containing
- the code of a Python extension module
+ the code of a Python extension module.
- The ``.c`` file is compiled by a C compiler to
a ``.so`` file (or ``.pyd`` on Windows) which can be
``import``-ed directly into a Python session.
+   Distutils or setuptools take care of this part,
+   although Cython can call them for you in certain cases.
+
+To understand fully the Cython + distutils/setuptools build process,
+one may want to read more about
+`distributing Python modules `_.
There are several ways to build Cython code:
- - Write a distutils ``setup.py``.
- - Use ``pyximport``, importing Cython ``.pyx`` files as if they
+ - Write a distutils/setuptools ``setup.py``. This is the normal and recommended way.
+ - Use :ref:`Pyximport`, importing Cython ``.pyx`` files as if they
were ``.py`` files (using distutils to compile and build in the background).
+ This method is easier than writing a ``setup.py``, but is not very flexible.
+     So you'll need to write a ``setup.py`` if, for example, you need certain compilation options.
- Run the ``cython`` command-line utility manually to produce the ``.c`` file
from the ``.pyx`` file, then manually compiling the ``.c`` file into a shared
object library or DLL suitable for import from Python.
(These manual steps are mostly for debugging and experimentation.)
- Use the [Jupyter]_ notebook or the [Sage]_ notebook,
both of which allow Cython code inline.
+ This is the easiest way to get started writing Cython code and running it.
-Currently, distutils is the most common way Cython files are built and distributed. The other methods are described in more detail in the :ref:`compilation` section of the reference manual.
+Currently, using distutils or setuptools is the most common way Cython files are built and distributed.
+The other methods are described in more detail in the :ref:`compilation` section of the reference manual.
Building a Cython module using distutils
----------------------------------------
-Imagine a simple "hello world" script in a file ``hello.pyx``::
-
- def say_hello_to(name):
- print("Hello %s!" % name)
+Imagine a simple "hello world" script in a file ``hello.pyx``:
-The following could be a corresponding ``setup.py`` script::
+.. literalinclude:: ../../examples/quickstart/build/hello.pyx
- from distutils.core import setup
- from Cython.Build import cythonize
+The following could be a corresponding ``setup.py`` script:
- setup(
- name = 'Hello world app',
- ext_modules = cythonize("hello.pyx"),
- )
+.. literalinclude:: ../../examples/quickstart/build/setup.py
To build, run ``python setup.py build_ext --inplace``. Then simply
start a Python session and do ``from hello import say_hello_to`` and
use the imported function as you see fit.
+One caveat: if you use setuptools instead of distutils, the default
+action when running ``python setup.py install`` is to create a zipped
+``egg`` file which will not work with ``cimport`` for ``pxd`` files
+when you try to use them from a dependent package.
+To prevent this, include ``zip_safe=False`` in the arguments to ``setup()``.
+
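
Spelled out, the caveat amounts to one extra keyword argument in the ``setup()`` call (a sketch with placeholder names; only ``zip_safe=False`` is the point being made):

```python
from setuptools import setup
from Cython.Build import cythonize

setup(
    name="my_package",                  # placeholder package name
    ext_modules=cythonize("*.pyx"),
    zip_safe=False,                     # keep .pxd files cimport-able
)
```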
+.. _jupyter-notebook:
Using the Jupyter notebook
--------------------------
@@ -59,8 +69,8 @@
(venv)$ pip install jupyter
(venv)$ jupyter notebook
-To enable support for Cython compilation, install Cython and load the
-``Cython`` extension from within the Jupyter notebook::
+To enable support for Cython compilation, install Cython as described in :ref:`the installation guide <install>`
+and load the ``Cython`` extension from within the Jupyter notebook::
%load_ext Cython
@@ -80,6 +90,8 @@
.. figure:: jupyter.png
+For more information about the arguments of the ``%%cython`` magic, see
+:ref:`Compiling with a Jupyter Notebook `.
Using the Sage notebook
-----------------------
@@ -93,4 +105,4 @@
.. [Jupyter] http://jupyter.org/
-.. [Sage] W. Stein et al., Sage Mathematics Software, http://sagemath.org
+.. [Sage] W. Stein et al., Sage Mathematics Software, http://www.sagemath.org/
diff -Nru cython-0.26.1/docs/src/quickstart/cythonize.rst cython-0.29.14/docs/src/quickstart/cythonize.rst
--- cython-0.26.1/docs/src/quickstart/cythonize.rst 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/docs/src/quickstart/cythonize.rst 2018-11-24 09:20:06.000000000 +0000
@@ -3,11 +3,11 @@
Cython is a Python compiler. This means that it can compile normal
Python code without changes (with a few obvious exceptions of some as-yet
-unsupported language features). However, for performance critical
-code, it is often helpful to add static type declarations, as they
-will allow Cython to step out of the dynamic nature of the Python code
-and generate simpler and faster C code - sometimes faster by orders of
-magnitude.
+unsupported language features, see :ref:`Cython limitations`).
+However, for performance critical code, it is often helpful to add
+static type declarations, as they will allow Cython to step out of the
+dynamic nature of the Python code and generate simpler and faster C code
+- sometimes faster by orders of magnitude.
It must be noted, however, that type declarations can make the source
code more verbose and thus less readable. It is therefore discouraged
@@ -30,35 +30,17 @@
Typing Variables
----------------
-Consider the following pure Python code::
+Consider the following pure Python code:
- def f(x):
- return x**2-x
-
- def integrate_f(a, b, N):
- s = 0
- dx = (b-a)/N
- for i in range(N):
- s += f(a+i*dx)
- return s * dx
+.. literalinclude:: ../../examples/quickstart/cythonize/integrate.py
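
The included ``integrate.py`` is the same pure-Python code as the inline listing it replaces: a left Riemann sum of ``f(x) = x**2 - x`` over ``[a, b]``. For reference, in runnable form:

```python
# Pure-Python integrate.py example, reconstructed from the inline
# listing this hunk removes: numerically integrate f(x) = x**2 - x
# over [a, b] with a left Riemann sum of N rectangles.
def f(x):
    return x ** 2 - x

def integrate_f(a, b, N):
    s = 0
    dx = (b - a) / N
    for i in range(N):
        s += f(a + i * dx)
    return s * dx
```

The exact integral over ``[0, 1]`` is ``-1/6``, which the sum approaches as ``N`` grows.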
Simply compiling this in Cython merely gives a 35% speedup. This is
better than nothing, but adding some static types can make a much larger
difference.
-With additional type declarations, this might look like::
-
- def f(double x):
- return x**2-x
+With additional type declarations, this might look like:
- def integrate_f(double a, double b, int N):
- cdef int i
- cdef double s, dx
- s = 0
- dx = (b-a)/N
- for i in range(N):
- s += f(a+i*dx)
- return s * dx
+.. literalinclude:: ../../examples/quickstart/cythonize/integrate_cy.pyx
Since the iterator variable ``i`` is typed with C semantics, the for-loop will be compiled
to pure C code. Typing ``a``, ``s`` and ``dx`` is important as they are involved
@@ -78,10 +60,9 @@
argument in order to pass it.
Therefore Cython provides a syntax for declaring a C-style function,
-the cdef keyword::
+the cdef keyword:
- cdef double f(double x) except? -2:
- return x**2-x
+.. literalinclude:: ../../examples/quickstart/cythonize/cdef_keyword.pyx
Some form of except-modifier should usually be added, otherwise Cython
will not be able to propagate exceptions raised in the function (or a
@@ -107,6 +88,8 @@
Speedup: 150 times over pure Python.
+.. _determining_where_to_add_types:
+
Determining where to add types
------------------------------
@@ -146,4 +129,10 @@
*integer types used in arithmetic expressions*, as Cython is unable to ensure
that an overflow would not occur (and so falls back to ``object`` in case
Python's bignums are needed). To allow inference of C integer types, set the
-``infer_types`` :ref:`directive ` to ``True``.
+``infer_types`` :ref:`directive ` to ``True``. For readers familiar with
+C++, this directive does a job similar to the ``auto`` keyword. It can be of great
+help in cutting down on the need to type everything, but it can also lead to
+surprises, especially if one isn't familiar with arithmetic expressions with C types.
+A quick overview of those
+can be found `here `_.
+
Binary files /tmp/tmp0lrW9P/aTeTJbw7H9/cython-0.26.1/docs/src/quickstart/htmlreport.png and /tmp/tmp0lrW9P/hdCxpT7ujz/cython-0.29.14/docs/src/quickstart/htmlreport.png differ
diff -Nru cython-0.26.1/docs/src/quickstart/install.rst cython-0.29.14/docs/src/quickstart/install.rst
--- cython-0.26.1/docs/src/quickstart/install.rst 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/docs/src/quickstart/install.rst 2018-11-24 09:20:06.000000000 +0000
@@ -1,8 +1,10 @@
+.. _install:
+
Installing Cython
=================
Many scientific Python distributions, such as Anaconda [Anaconda]_,
-Enthought Canopy [Canopy]_, Python(x,y) [Pythonxy]_, and Sage [Sage]_,
+Enthought Canopy [Canopy]_, and Sage [Sage]_,
bundle Cython and no setup is needed. Note however that if your
distribution ships a version of Cython which is too old you can still
use the instructions below to update Cython. Everything in this
@@ -20,7 +22,7 @@
- **Mac OS X** To retrieve gcc, one option is to install Apple's
XCode, which can be retrieved from the Mac OS X's install DVDs or
- from http://developer.apple.com.
+ from https://developer.apple.com/.
- **Windows** A popular option is to use the open source MinGW (a
Windows distribution of gcc). See the appendix for instructions for
@@ -33,19 +35,17 @@
.. dagss tried other forms of ReST lists and they didn't look nice
.. with rst2latex.
+The simplest way of installing Cython is by using ``pip``::
+
+ pip install Cython
+
+
The newest Cython release can always be downloaded from
http://cython.org. Unpack the tarball or zip file, enter the
directory, and then run::
python setup.py install
-If you have ``pip`` set up on your system (e.g. in a virtualenv or a
-recent Python version), you should be able to fetch Cython from PyPI
-and install it using
-
-::
-
- pip install Cython
For one-time builds, e.g. for CI/testing, on platforms that are not covered
by one of the wheel packages provided on PyPI, it is substantially faster
@@ -57,7 +57,6 @@
pip install Cython --install-option="--no-cython-compile"
-.. [Anaconda] http://docs.continuum.io/anaconda/
-.. [Canopy] https://enthought.com/products/canopy/
-.. [Pythonxy] http://www.pythonxy.com/
-.. [Sage] W. Stein et al., Sage Mathematics Software, http://sagemath.org
+.. [Anaconda] https://docs.anaconda.com/anaconda/
+.. [Canopy] https://www.enthought.com/product/canopy/
+.. [Sage] W. Stein et al., Sage Mathematics Software, http://www.sagemath.org/
diff -Nru cython-0.26.1/docs/src/quickstart/overview.rst cython-0.29.14/docs/src/quickstart/overview.rst
--- cython-0.26.1/docs/src/quickstart/overview.rst 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/docs/src/quickstart/overview.rst 2018-11-24 09:20:06.000000000 +0000
@@ -45,7 +45,7 @@
.. [Cython] G. Ewing, R. W. Bradshaw, S. Behnel, D. S. Seljebotn et al.,
The Cython compiler, http://cython.org.
-.. [IronPython] Jim Hugunin et al., http://www.codeplex.com/IronPython.
+.. [IronPython] Jim Hugunin et al., https://archive.codeplex.com/?p=IronPython.
.. [Jython] J. Huginin, B. Warsaw, F. Bock, et al.,
Jython: Python for the Java platform, http://www.jython.org.
.. [PyPy] The PyPy Group, PyPy: a Python implementation written in Python,
@@ -53,4 +53,4 @@
.. [Pyrex] G. Ewing, Pyrex: C-Extensions for Python,
http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/
.. [Python] G. van Rossum et al., The Python programming language,
- http://python.org.
+ https://www.python.org/.
diff -Nru cython-0.26.1/docs/src/reference/compilation.rst cython-0.29.14/docs/src/reference/compilation.rst
--- cython-0.26.1/docs/src/reference/compilation.rst 2017-08-25 16:06:31.000000000 +0000
+++ cython-0.29.14/docs/src/reference/compilation.rst 2018-11-24 09:20:06.000000000 +0000
@@ -1,582 +1,103 @@
.. highlight:: cython
-.. _compilation-reference:
-
=============
Compilation
=============
-Cython code, unlike Python, must be compiled. This happens in two stages:
-
- * A ``.pyx`` file is compiled by Cython to a ``.c`` file.
-
- * The ``.c`` file is compiled by a C compiler to a ``.so`` file (or a
- ``.pyd`` file on Windows)
+.. note::
+ The sections in this page were moved to the :ref:`compilation` in the userguide.
-The following sub-sections describe several ways to build your
-extension modules, and how to pass directives to the Cython compiler.
Compiling from the command line
===============================
-Run the Cython compiler command with your options and list of ``.pyx``
-files to generate. For example::
-
- $ cython -a yourmod.pyx
-
-This creates a ``yourmod.c`` file, and the ``-a`` switch produces an
-annotated html file of the source code. Pass the ``-h`` flag for a
-complete list of supported flags.
-
-Compiling your ``.c`` files will vary depending on your operating
-system. Python documentation for writing extension modules should
-have some details for your system. Here we give an example on a Linux
-system::
-
- $ gcc -shared -pthread -fPIC -fwrapv -O2 -Wall -fno-strict-aliasing \
- -I/usr/include/python2.7 -o yourmod.so yourmod.c
-
-[``gcc`` will need to have paths to your included header files and
-paths to libraries you need to link with]
-
-A ``yourmod.so`` file is now in the same directory and your module,
-``yourmod``, is available for you to import as you normally would.
-
+This section was moved to :ref:`compiling_command_line`.
Compiling with ``distutils``
============================
-The ``distutils`` package is part of the standard library. It is the standard
-way of building Python packages, including native extension modules. The
-following example configures the build for a Cython file called *hello.pyx*.
-First, create a ``setup.py`` script::
-
- from distutils.core import setup
- from Cython.Build import cythonize
-
- setup(
- name = "My hello app",
- ext_modules = cythonize('hello.pyx'), # accepts a glob pattern
- )
-
-Now, run the command ``python setup.py build_ext --inplace`` in your
-system's command shell and you are done. Import your new extension
-module into your python shell or script as normal.
-
-The ``cythonize`` command also allows for multi-threaded compilation and
-dependency resolution. Recompilation will be skipped if the target file
-is up to date with its main source file and dependencies.
-
+This section was moved to :ref:`basic_setup.py`.
Configuring the C-Build
------------------------
-If you have include files in non-standard places you can pass an
-``include_path`` parameter to ``cythonize``::
-
- from distutils.core import setup
- from Cython.Build import cythonize
-
- setup(
- name = "My hello app",
- ext_modules = cythonize("src/*.pyx", include_path = [...]),
- )
-
-Often, Python packages that offer a C-level API provide a way to find
-the necessary include files, e.g. for NumPy::
-
- include_path = [numpy.get_include()]
-
-Note for Numpy users. Despite this, you will still get warnings like the
-following from the compiler, because Cython is using a deprecated Numpy API::
-
- .../include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
-
-For the time being, it is just a warning that you can ignore.
-
-If you need to specify compiler options, libraries to link with or other
-linker options you will need to create ``Extension`` instances manually
-(note that glob syntax can still be used to specify multiple extensions
-in one line)::
-
- from distutils.core import setup
- from distutils.extension import Extension
- from Cython.Build import cythonize
-
- extensions = [
- Extension("primes", ["primes.pyx"],
- include_dirs = [...],
- libraries = [...],
- library_dirs = [...]),
- # Everything but primes.pyx is included here.
- Extension("*", ["*.pyx"],
- include_dirs = [...],
- libraries = [...],
- library_dirs = [...]),
- ]
- setup(
- name = "My hello app",
- ext_modules = cythonize(extensions),
- )
-
-Note that when using setuptools, you should import it before Cython as
-setuptools may replace the ``Extension`` class in distutils. Otherwise,
-both might disagree about the class to use here.
-
-If your options are static (for example you do not need to call a tool like
-``pkg-config`` to determine them) you can also provide them directly in your
-.pyx or .pxd source file using a special comment block at the start of the file::
-
- # distutils: libraries = spam eggs
- # distutils: include_dirs = /opt/food/include
-
-If you cimport multiple .pxd files defining libraries, then Cython
-merges the list of libraries, so this works as expected (similarly
-with other options, like ``include_dirs`` above).
-
-If you have some C files that have been wrapped with Cython and you want to
-compile them into your extension, you can define the distutils ``sources``
-parameter::
-
- # distutils: sources = helper.c, another_helper.c
-
-Note that these sources are added to the list of sources of the current
-extension module. Spelling this out in the :file:`setup.py` file looks
-as follows::
-
- from distutils.core import setup
- from Cython.Build import cythonize
- from distutils.extension import Extension
-
- sourcefiles = ['example.pyx', 'helper.c', 'another_helper.c']
-
- extensions = [Extension("example", sourcefiles)]
-
- setup(
- ext_modules = cythonize(extensions)
- )
-
-The :class:`Extension` class takes many options, and a fuller explanation can
-be found in the `distutils documentation`_. Some useful options to know about
-are ``include_dirs``, ``libraries``, and ``library_dirs`` which specify where
-to find the ``.h`` and library files when linking to external libraries.
-
-.. _distutils documentation: http://docs.python.org/extending/building.html
-
-Sometimes this is not enough and you need finer customization of the
-distutils :class:`Extension`.
-To do this, you can provide a custom function ``create_extension``
-to create the final :class:`Extension` object after Cython has processed
-the sources, dependencies and ``# distutils`` directives but before the
-file is actually Cythonized.
-This function takes 2 arguments ``template`` and ``kwds``, where
-``template`` is the :class:`Extension` object given as input to Cython
-and ``kwds`` is a :class:`dict` with all keywords which should be used
-to create the :class:`Extension`.
-The function ``create_extension`` must return a 2-tuple
-``(extension, metadata)``, where ``extension`` is the created
-:class:`Extension` and ``metadata`` is metadata which will be written
-as JSON at the top of the generated C files. This metadata is only used
-for debugging purposes, so you can put whatever you want in there
-(as long as it can be converted to JSON).
-The default function (defined in ``Cython.Build.Dependencies``) is::
-
- def default_create_extension(template, kwds):
- if 'depends' in kwds:
- include_dirs = kwds.get('include_dirs', []) + ["."]
- depends = resolve_depends(kwds['depends'], include_dirs)
- kwds['depends'] = sorted(set(depends + template.depends))
-
- t = template.__class__
- ext = t(**kwds)
- metadata = dict(distutils=kwds, module_name=kwds['name'])
- return (ext, metadata)
-
-If you pass a string instead of an :class:`Extension` to
-``cythonize()``, the ``template`` will be an :class:`Extension` without
-sources. For example, if you do ``cythonize("*.pyx")``,
-the ``template`` will be ``Extension(name="*.pyx", sources=[])``.
-
-Just as an example, this adds ``mylib`` as library to every extension::
-
- from Cython.Build.Dependencies import default_create_extension
-
- def my_create_extension(template, kwds):
- libs = kwds.get('libraries', []) + ["mylib"]
- kwds['libraries'] = libs
- return default_create_extension(template, kwds)
+This section was moved to :ref:`basic_setup.py`.
- ext_modules = cythonize(..., create_extension=my_create_extension)
+Cythonize arguments
+-------------------
-.. note::
+This section was moved to :ref:`cythonize_arguments`.
- If you Cythonize in parallel (using the ``nthreads`` argument),
- then the argument to ``create_extension`` must be pickleable.
- In particular, it cannot be a lambda function.
+Compiler options
+----------------
+This section was moved to :ref:`compiler_options`.
Distributing Cython modules
----------------------------
-It is strongly recommended that you distribute the generated ``.c`` files as well
-as your Cython sources, so that users can install your module without needing
-to have Cython available.
-
-It is also recommended that Cython compilation not be enabled by default in the
-version you distribute. Even if users have Cython installed, they probably
-don't want to use it just to install your module. Also, the installed version
-may not be the same one you used, and may not compile your sources correctly.
-
-This simply means that the :file:`setup.py` file that you ship with will just
-be a normal distutils file on the generated `.c` files, for the basic example
-we would have instead::
-
- from distutils.core import setup
- from distutils.extension import Extension
-
- setup(
- ext_modules = [Extension("example", ["example.c"])]
- )
-
-This is easy to combine with :func:`cythonize` by changing the file extension
-of the extension module sources::
-
- from distutils.core import setup
- from distutils.extension import Extension
-
- USE_CYTHON = ... # command line option, try-import, ...
-
- ext = '.pyx' if USE_CYTHON else '.c'
-
- extensions = [Extension("example", ["example"+ext])]
-
- if USE_CYTHON:
- from Cython.Build import cythonize
- extensions = cythonize(extensions)
-
- setup(
- ext_modules = extensions
- )
-
-If you have many extensions and want to avoid the additional complexity in the
-declarations, you can declare them with their normal Cython sources and then
-call the following function instead of ``cythonize()`` to adapt the sources
-list in the Extensions when not using Cython::
-
- import os.path
-
- def no_cythonize(extensions, **_ignore):
- for extension in extensions:
- sources = []
- for sfile in extension.sources:
- path, ext = os.path.splitext(sfile)
- if ext in ('.pyx', '.py'):
- if extension.language == 'c++':
- ext = '.cpp'
- else:
- ext = '.c'
- sfile = path + ext
- sources.append(sfile)
- extension.sources[:] = sources
- return extensions
-
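As a quick sanity check, the behaviour of ``no_cythonize()`` can be exercised without distutils by substituting a minimal stand-in for the ``Extension`` class (``FakeExtension`` below is a made-up name used purely for illustration):

```python
import os.path

class FakeExtension:
    """Minimal stand-in for distutils.extension.Extension."""
    def __init__(self, name, sources, language=None):
        self.name = name
        self.sources = sources
        self.language = language

def no_cythonize(extensions, **_ignore):
    # Same logic as in the text above: swap .pyx/.py sources for their
    # generated .c/.cpp counterparts so the build no longer needs Cython.
    for extension in extensions:
        sources = []
        for sfile in extension.sources:
            path, ext = os.path.splitext(sfile)
            if ext in ('.pyx', '.py'):
                ext = '.cpp' if extension.language == 'c++' else '.c'
                sfile = path + ext
            sources.append(sfile)
        extension.sources[:] = sources
    return extensions

ext = FakeExtension("example", ["example.pyx", "helper.c"])
no_cythonize([ext])
print(ext.sources)  # ['example.c', 'helper.c']
```

Note that plain ``.c`` sources pass through untouched, and C++ extensions are mapped to ``.cpp`` instead.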
-Another option is to make Cython a setup dependency of your system and use
-Cython's build_ext module which runs ``cythonize`` as part of the build process::
-
- setup(
- setup_requires=[
- 'cython>=0.x',
- ],
- extensions = [Extension("*", ["*.pyx"])],
- cmdclass={'build_ext': Cython.Build.build_ext},
- ...
- )
-
-If you want to expose the C-level interface of your library for other
-libraries to cimport from, use package_data to install the ``.pxd`` files,
-e.g.::
-
- setup(
- package_data = {
- 'my_package': ['*.pxd'],
- 'my_package/sub_package': ['*.pxd'],
- },
- ...
- )
-
-These ``.pxd`` files need not have corresponding ``.pyx``
-modules if they contain only declarations of external libraries.
-
-Compiling with ``pyximport``
-=============================
-
-For generating Cython code right in your pure python module just type::
-
- >>> import pyximport; pyximport.install()
- >>> import helloworld
- Hello World
-
-This allows you to automatically run Cython on every ``.pyx`` that
-Python is trying to import. You should use this for simple Cython
-builds only where no extra C libraries and no special building setup
-is needed.
-
-In the case that Cython fails to compile a Python module, *pyximport*
-will fall back to loading the source modules instead.
-
-It is also possible to compile new ``.py`` modules that are being
-imported (including the standard library and installed packages). For
-using this feature, just tell that to ``pyximport``::
+This section was moved to :ref:`distributing_cython_modules`.
+
+Integrating multiple modules
+============================
+
+This section was moved to :ref:`integrating_multiple_modules`.
+
+Compiling with :mod:`pyximport`
+===============================
+
+This section was moved to :ref:`pyximport`.
+
+Arguments
+---------
- >>> pyximport.install(pyimport = True)
+Dependency Handling
+--------------------
+
+Limitations
+------------
Compiling with ``cython.inline``
=================================
-One can also compile Cython in a fashion similar to SciPy's ``weave.inline``.
-For example::
-
- >>> import cython
- >>> def f(a):
- ... ret = cython.inline("return a+b", b=3)
- ...
-
-Unbound variables are automatically pulled from the surrounding local
-and global scopes, and the result of the compilation is cached for
-efficient re-use.
+This section was moved to :ref:`compiling_with_cython_inline`.
Compiling with Sage
===================
-The Sage notebook allows transparently editing and compiling Cython
-code simply by typing ``%cython`` at the top of a cell and evaluating
-it. Variables and functions defined in a Cython cell are imported into the
-running session. Please check `Sage documentation
- `_ for details.
+This section was moved to :ref:`compiling_with_sage`.
-You can tailor the behavior of the Cython compiler by specifying the
-directives below.
+Compiling with a Jupyter Notebook
+=================================
-.. _compiler-directives:
+This section was moved to :ref:`compiling_notebook`.
Compiler directives
====================
-Compiler directives are instructions which affect the behavior of
-Cython code. Here is the list of currently supported directives:
-
-``binding`` (True / False)
- Controls whether free functions behave more like Python's CFunctions
- (e.g. :func:`len`) or, when set to True, more like Python's functions.
- When enabled, functions will bind to an instance when looked up as a
- class attribute (hence the name) and will emulate the attributes
- of Python functions, including introspections like argument names and
- annotations.
- Default is False.
-
-``boundscheck`` (True / False)
- If set to False, Cython is free to assume that indexing operations
- ([]-operator) in the code will not cause any IndexErrors to be
- raised. Lists, tuples, and strings are affected only if the index
- can be determined to be non-negative (or if ``wraparound`` is False).
- Conditions
- which would normally trigger an IndexError may instead cause
- segfaults or data corruption if this is set to False.
- Default is True.
-
-``wraparound`` (True / False)
- In Python arrays can be indexed relative to the end. For example
- A[-1] indexes the last value of a list. In C negative indexing is
- not supported. If set to False, Cython will neither check for nor
- correctly handle negative indices, possibly causing segfaults or
- data corruption.
- Default is True.
-
-``initializedcheck`` (True / False)
- If set to True, Cython checks that a memoryview is initialized
- whenever its elements are accessed or assigned to. Setting this
- to False disables these checks.
- Default is True.
-
-``nonecheck`` (True / False)
- If set to False, Cython is free to assume that native field
- accesses on variables typed as an extension type, or buffer
- accesses on a buffer variable, never occurs when the variable is
- set to ``None``. Otherwise a check is inserted and the
- appropriate exception is raised. This is off by default for
- performance reasons. Default is False.
-
-``overflowcheck`` (True / False)
- If set to True, raise errors on overflowing C integer arithmetic
- operations. Incurs a modest runtime penalty, but is much faster than
- using Python ints. Default is False.
-
-``overflowcheck.fold`` (True / False)
- If set to True, and overflowcheck is True, check the overflow bit for
- nested, side-effect-free arithmetic expressions once rather than at every
- step. Depending on the compiler, architecture, and optimization settings,
- this may help or hurt performance. A simple suite of benchmarks can be
- found in ``Demos/overflow_perf.pyx``. Default is True.
-
-``embedsignature`` (True / False)
- If set to True, Cython will embed a textual copy of the call
- signature in the docstring of all Python visible functions and
- classes. Tools like IPython and epydoc can thus display the
- signature, which cannot otherwise be retrieved after
- compilation. Default is False.
-
-``cdivision`` (True / False)
- If set to False, Cython will adjust the remainder and quotient
-  operators for C types to match those of Python ints (which differ when
- the operands have opposite signs) and raise a
- ``ZeroDivisionError`` when the right operand is 0. This has up to
- a 35% speed penalty. If set to True, no checks are performed. See
- `CEP 516 `_. Default
- is False.
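The semantic difference is easy to see in plain Python, whose behaviour ``cdivision=False`` preserves (this is ordinary Python, not Cython-specific code):

```python
# Python floors the quotient and gives the remainder the sign of the
# divisor; C truncates the quotient toward zero and gives the remainder
# the sign of the dividend.
q, r = divmod(-7, 5)
print(q, r)  # -2 3   (C semantics would give -1 and -2)
```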
-
-``cdivision_warnings`` (True / False)
- If set to True, Cython will emit a runtime warning whenever
- division is performed with negative operands. See `CEP 516
- `_. Default is
- False.
-
-``always_allow_keywords`` (True / False)
-  Avoid the ``METH_NOARGS`` and ``METH_O`` calling conventions when constructing
- functions/methods which take zero or one arguments. Has no effect
- on special methods and functions with more than one argument. The
- ``METH_NOARGS`` and ``METH_O`` signatures provide faster
- calling conventions but disallow the use of keywords.
-
-``profile`` (True / False)
- Write hooks for Python profilers into the compiled C code. Default
- is False.
-
-``linetrace`` (True / False)
- Write line tracing hooks for Python profilers or coverage reporting
- into the compiled C code. This also enables profiling. Default is
- False. Note that the generated module will not actually use line
- tracing, unless you additionally pass the C macro definition
- ``CYTHON_TRACE=1`` to the C compiler (e.g. using the distutils option
- ``define_macros``). Define ``CYTHON_TRACE_NOGIL=1`` to also include
- ``nogil`` functions and sections.
-
-``infer_types`` (True / False)
- Infer types of untyped variables in function bodies. Default is
- None, indicating that only safe (semantically-unchanging) inferences
- are allowed.
- In particular, inferring *integral* types for variables *used in arithmetic
- expressions* is considered unsafe (due to possible overflow) and must be
- explicitly requested.
-
-``language_level`` (2/3)
- Globally set the Python language level to be used for module
- compilation. Default is compatibility with Python 2. To enable
- Python 3 source code semantics, set this to 3 at the start of a
- module or pass the "-3" command line option to the compiler.
- Note that cimported and included source files inherit this
- setting from the module being compiled, unless they explicitly
- set their own language level.
-
-``c_string_type`` (bytes / str / unicode)
- Globally set the type of an implicit coercion from char* or std::string.
-
-``c_string_encoding`` (ascii, default, utf-8, etc.)
-  Globally set the encoding to use when implicitly coercing char* or std::string
- to a unicode object. Coercion from a unicode object to C type is only allowed
- when set to ``ascii`` or ``default``, the latter being utf-8 in Python 3 and
- nearly-always ascii in Python 2.
-
-``type_version_tag`` (True / False)
- Enables the attribute cache for extension types in CPython by setting the
- type flag ``Py_TPFLAGS_HAVE_VERSION_TAG``. Default is True, meaning that
- the cache is enabled for Cython implemented types. To disable it
- explicitly in the rare cases where a type needs to juggle with its ``tp_dict``
- internally without paying attention to cache consistency, this option can
- be set to False.
-
-``unraisable_tracebacks`` (True / False)
- Whether to print tracebacks when suppressing unraisable exceptions.
-
+This section was moved to :ref:`compiler-directives`.
Configurable optimisations
--------------------------
-``optimize.use_switch`` (True / False)
- Whether to expand chained if-else statements (including statements like
- ``if x == 1 or x == 2:``) into C switch statements. This can have performance
- benefits if there are lots of values but cause compiler errors if there are any
- duplicate values (which may not be detectable at Cython compile time for all
- C constants). Default is True.
-
-``optimize.unpack_method_calls`` (True / False)
- Cython can generate code that optimistically checks for Python method objects
- at call time and unpacks the underlying function to call it directly. This
- can substantially speed up method calls, especially for builtins, but may also
- have a slight negative performance impact in some cases where the guess goes
- completely wrong.
- Disabling this option can also reduce the code size. Default is True.
+This section was moved to :ref:`configurable_optimisations`.
+Warnings
+--------
+
+This section was moved to :ref:`warnings`.
How to set directives
---------------------
+This section was moved to :ref:`how_to_set_directives`.
+
Globally
:::::::::
-One can set compiler directives through a special header comment at the top of the file, like this::
-
- #!python
- #cython: language_level=3, boundscheck=False
-
-The comment must appear before any code (but can appear after other
-comments or whitespace).
-
-One can also pass a directive on the command line by using the -X switch::
-
- $ cython -X boundscheck=True ...
-
-Directives passed on the command line will override directives set in
-header comments.
-
Locally
::::::::
-For local blocks, you need to cimport the special builtin ``cython``
-module::
-
- #!python
- cimport cython
-
-Then you can use the directives either as decorators or in a with
-statement, like this::
-
- #!python
- @cython.boundscheck(False) # turn off boundscheck for this function
- def f():
- ...
- # turn it temporarily on again for this block
- with cython.boundscheck(True):
- ...
-
-.. Warning:: These two methods of setting directives are **not**
- affected by overriding the directive on the command-line using the
- -X option.
-
In :file:`setup.py`
:::::::::::::::::::
-
-Compiler directives can also be set in the :file:`setup.py` file by passing a keyword
-argument to ``cythonize``::
-
- from distutils.core import setup
- from Cython.Build import cythonize
-
- setup(
- name = "My hello app",
- ext_modules = cythonize('hello.pyx', compiler_directives={'embedsignature': True}),
- )
-
-This will override the default directives as specified in the ``compiler_directives`` dictionary.
-Note that explicit per-file or local directives as explained above take precedence over the
-values passed to ``cythonize``.
diff -Nru cython-0.26.1/docs/src/reference/extension_types.rst cython-0.29.14/docs/src/reference/extension_types.rst
--- cython-0.26.1/docs/src/reference/extension_types.rst 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/docs/src/reference/extension_types.rst 2018-11-24 09:20:06.000000000 +0000
@@ -1,430 +1,100 @@
.. highlight:: cython
-.. _extension_types:
-
***************
Extension Types
***************
-* Normal Python as well as extension type classes can be defined.
-* Extension types:
-
- * Are considered by Python as "built-in" types.
- * Can be used to wrap arbitrary C-data structures, and provide a Python-like interface to them from Python.
- * Attributes and methods can be called from Python or Cython code
- * Are defined by the ``cdef class`` statement.
-
-::
-
- cdef class Shrubbery:
-
- cdef int width, height
-
- def __init__(self, w, h):
- self.width = w
- self.height = h
+.. note::
- def describe(self):
- print "This shrubbery is", self.width, \
- "by", self.height, "cubits."
+ The sections in this page were moved to the :ref:`extension-types`
+   and :ref:`special-methods` sections in the userguide.
==========
Attributes
==========
-* Are stored directly in the object's C struct.
-* Are fixed at compile time.
-
- * You can't add attributes to an extension type instance at run time like in normal Python, unless you define a ``__dict__`` attribute.
- * You can sub-class the extension type in Python to add attributes at run-time.
-
-* There are two ways to access extension type attributes:
-
- * By Python look-up.
-
- * Python code's only method of access.
-
- * By direct access to the C struct from Cython code.
-
- * Cython code can use either method of access, though.
-
-* By default, extension type attributes are:
-
- * Only accessible by direct access.
- * Not accessible from Python code.
-
-* To make attributes accessible to Python, they must be declared ``public`` or ``readonly``::
-
- cdef class Shrubbery:
- cdef public int width, height
- cdef readonly float depth
-
- * The ``width`` and ``height`` attributes are readable and writable from Python code.
- * The ``depth`` attribute is readable but not writable.
-
-.. note::
-    You can only expose simple C types, such as ints, floats, and strings, for Python access. You can also expose Python-valued attributes.
-
-.. note::
-    The ``public`` and ``readonly`` options apply only to Python access, not direct access. All the attributes of an extension type are always readable and writable by C-level access.
-
+This section was moved to :ref:`readonly`.
=======
Methods
=======
-* ``self`` is used in extension type methods just like it normally is in Python.
-* See **Functions and Methods**; all of which applies here.
-
==========
Properties
==========
-* Cython provides a special (deprecated) syntax::
-
- cdef class Spam:
-
- property cheese:
-
- "A doc string can go here."
-
- def __get__(self):
- # This is called when the property is read.
- ...
-
- def __set__(self, value):
- # This is called when the property is written.
- ...
-
- def __del__(self):
- # This is called when the property is deleted.
-
-* The ``__get__()``, ``__set__()``, and ``__del__()`` methods are all optional.
-
- * If they are omitted, an exception is raised on attribute access.
-
-* Below, is a full example that defines a property which can..
-
- * Add to a list each time it is written to (``"__set__"``).
- * Return the list when it is read (``"__get__"``).
- * Empty the list when it is deleted (``"__del__"``).
-
-::
-
- # cheesy.pyx
- cdef class CheeseShop:
-
- cdef object cheeses
-
- def __cinit__(self):
- self.cheeses = []
-
- property cheese: # note that this syntax is deprecated
-
- def __get__(self):
- return "We don't have: %s" % self.cheeses
-
- def __set__(self, value):
- self.cheeses.append(value)
-
- def __del__(self):
- del self.cheeses[:]
-
- # Test input
- from cheesy import CheeseShop
-
- shop = CheeseShop()
- print shop.cheese
-
- shop.cheese = "camembert"
- print shop.cheese
-
- shop.cheese = "cheddar"
- print shop.cheese
-
- del shop.cheese
- print shop.cheese
-
-::
-
- # Test output
- We don't have: []
- We don't have: ['camembert']
- We don't have: ['camembert', 'cheddar']
- We don't have: []
-
+This section was moved to :ref:`properties`.
===============
Special Methods
===============
-.. note::
-
-   #. The semantics of Cython's special methods are similar in principle to those of Python's.
- #. There are substantial differences in some behavior.
- #. Some Cython special methods have no Python counter-part.
-
-* See the :ref:`special_methods_table` for the many that are available.
-
+This section was moved to :ref:`special-methods`.
Declaration
===========
-* Must be declared with ``def`` and cannot be declared with ``cdef``.
-* Performance is not affected by the ``def`` declaration because of special calling conventions
+This section was moved to :ref:`declaration`.
Docstrings
==========
-* Docstrings are not supported yet for some special method types.
-* They can be included in the source, but may not appear in the corresponding ``__doc__`` attribute at run-time.
-
-  * This is a Python library limitation because the ``PyTypeObject`` data structure is limited.
-
+This section was moved to :ref:`docstrings`.
Initialization: ``__cinit__()`` and ``__init__()``
==================================================
-* Any arguments passed to the extension type's constructor
- will be passed to both initialization methods.
-
-* ``__cinit__()`` is where you should perform C-level initialization of the object
-
- * This includes any allocation of C data structures.
- * **Caution** is warranted as to what you do in this method.
-
-    * The object may not be a fully valid Python object when it is called.
-    * Calling Python objects, including the extension's own methods, may be hazardous.
-
- * By the time ``__cinit__()`` is called...
-
- * Memory has been allocated for the object.
- * All C-level attributes have been initialized to 0 or null.
-    * Python object attributes have been initialized to ``None``, but you cannot rely on that in every case.
- * This initialization method is guaranteed to be called exactly once.
-
- * For Extensions types that inherit a base type:
-
- * The ``__cinit__()`` method of the base type is automatically called before this one.
-    * The inherited ``__cinit__()`` method cannot be called explicitly.
- * Passing modified argument lists to the base type must be done through ``__init__()``.
- * It may be wise to give the ``__cinit__()`` method both ``"*"`` and ``"**"`` arguments.
-
- * Allows the method to accept or ignore additional arguments.
-      * Spares a Python-level sub-class that changes the ``__init__()`` method's
-        signature from having to override both the ``__new__()`` and ``__init__()`` methods.
-
- * If ``__cinit__()`` is declared to take no arguments except ``self``, it will ignore any
- extra arguments passed to the constructor without complaining about a signature mis-match.
-
-
-* ``__init__()`` is for higher-level initialization and is safer for Python access.
-
- * By the time this method is called, the extension type is a fully valid Python object.
- * All operations are safe.
- * This method may sometimes be called more than once, or possibly not at all.
-
-    * Take this into consideration to make sure the design of your other methods is robust to this fact.
-
-Note that all constructor arguments will be passed as Python objects.
-This implies that non-convertible C types such as pointers or C++ objects
-cannot be passed into the constructor from Cython code. If this is needed,
-use a factory function instead that handles the object initialisation.
-It often helps to directly call ``__new__()`` in this function to bypass the
-call to the ``__init__()`` constructor.
-
+This section was moved to :ref:`initialisation_methods`.
Finalization: ``__dealloc__()``
===============================
-* This method is the counter-part to ``__cinit__()``.
-* Any C-data that was explicitly allocated in the ``__cinit__()`` method should be freed here.
-* Use caution in this method:
-
- * The Python object to which this method belongs may not be completely intact at this point.
- * Avoid invoking any Python operations that may touch the object.
- * Don't call any of this object's methods.
- * It's best to just deallocate C-data structures here.
-
-* All Python attributes of your extension type object are deallocated by Cython after the ``__dealloc__()`` method returns.
+This section was moved to :ref:`finalization_method`.
Arithmetic Methods
==================
-.. note:: Most of these methods behave differently than in Python
-
-* There are no "reversed" versions of these methods; there is no ``__radd__()``, for instance.
-* If the first operand cannot perform the operation, the same method of the second operand is called, with the operands in the same order.
-* Do not rely on the first parameter of these methods being ``self`` or of the right type.
-* The types of both operands should be tested before deciding what to do.
-* Return ``NotImplemented`` for unhandled, mis-matched operand types.
-* The previously mentioned points:
-
-  * Also apply to the 'in-place' method ``__ipow__()``.
-  * Do not apply to other 'in-place' methods like ``__iadd__()``, as those always take ``self`` as the first argument.
-
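A minimal sketch of the pattern these rules imply (``Length`` is a made-up example type): test both operands before touching any C-level attributes, and fall back to ``NotImplemented`` for mismatched types.

```cython
cdef class Length:
    cdef public int cm

    def __init__(self, cm):
        self.cm = cm

    def __add__(x, y):
        # Neither operand is guaranteed to be a Length: check both
        # before accessing C-level attributes.
        if isinstance(x, Length) and isinstance(y, Length):
            return Length((<Length>x).cm + (<Length>y).cm)
        return NotImplemented
```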
+This section was moved to :ref:`arithmetic_methods`.
Rich Comparisons
================
-.. note:: There are no separate methods for individual rich comparison operations.
-
-* A single special method called ``__richcmp__()`` replaces all the individual rich compare, special method types.
-* ``__richcmp__()`` takes an integer argument, indicating which operation is to be performed as shown in the table below.
-
- +-----+-----+
- | < | 0 |
- +-----+-----+
- | == | 2 |
- +-----+-----+
- | > | 4 |
- +-----+-----+
- | <= | 1 |
- +-----+-----+
- | != | 3 |
- +-----+-----+
- | >= | 5 |
- +-----+-----+
-
-
-
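Put together, a ``__richcmp__()`` implementation dispatches on the integer code from the table above. This is a hedged sketch (``Point`` is a made-up example type) supporting only equality comparisons:

```cython
cdef class Point:
    cdef public double x

    def __init__(self, x):
        self.x = x

    def __richcmp__(self, other, int op):
        # op encodes the operator: 0 <, 1 <=, 2 ==, 3 !=, 4 >, 5 >=
        if not isinstance(self, Point) or not isinstance(other, Point):
            return NotImplemented
        if op == 2:      # ==
            return (<Point>self).x == (<Point>other).x
        elif op == 3:    # !=
            return (<Point>self).x != (<Point>other).x
        return NotImplemented  # ordering comparisons not supported
```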
+This section was moved to :ref:`righ_comparisons`.
The ``__next__()`` Method
=========================
-* Extension types used to expose an iterator interface should define a ``__next__()`` method.
-* **Do not** explicitly supply a ``next()`` method, because Python does that for you automatically.
-
+This section was moved to :ref:`the__next__method`.
===========
Subclassing
===========
-* An extension type may inherit from a built-in type or another extension type::
-
- cdef class Parrot:
- ...
-
- cdef class Norwegian(Parrot):
- ...
-
-* A complete definition of the base type must be available to Cython
-
- * If the base type is a built-in type, it must have been previously declared as an ``extern`` extension type.
- * ``cimport`` can be used to import the base type, if the extern declared base type is in a ``.pxd`` definition file.
-
-  * In Cython, multiple inheritance is not permitted; single inheritance only.
-
-* Cython extension types can also be sub-classed in Python.
-
- * Here multiple inheritance is permissible as is normal for Python.
- * Even multiple extension types may be inherited, but C-layout of all the base classes must be compatible.
-
+This section was moved to :ref:`subclassing`.
====================
Forward Declarations
====================
-* Extension types can be "forward-declared".
-* This is necessary when two extension types refer to each other::
-
- cdef class Shrubbery # forward declaration
-
- cdef class Shrubber:
- cdef Shrubbery work_in_progress
-
- cdef class Shrubbery:
- cdef Shrubber creator
-
-* An extension type with a base class requires that the base class be specified in both the forward declaration and the definition::
-
- cdef class A(B)
-
- ...
-
- cdef class A(B):
- # attributes and methods
-
+This section was moved to :ref:`forward_declaring_extension_types`.
========================
Extension Types and None
========================
-* Parameters and C-variables declared as an extension type may take the value of ``None``.
-* This is analogous to the way a C-pointer can take the value of ``NULL``.
-
-.. note::
- #. Exercise caution when using ``None``
- #. Read this section carefully.
-
-* There is no problem as long as you are performing Python operations on it.
-
- * This is because full dynamic type checking is applied
-
-* When accessing an extension type's C-attributes, **make sure** it is not ``None``.
-
- * Cython does not check this for reasons of efficiency.
-
-* Be very aware of exposing Python functions that take extension types as arguments::
-
- def widen_shrubbery(Shrubbery sh, extra_width): # This is
- sh.width = sh.width + extra_width
-
- * Users could **crash** the program by passing ``None`` for the ``sh`` parameter.
- * This could be avoided by::
-
- def widen_shrubbery(Shrubbery sh, extra_width):
- if sh is None:
- raise TypeError
- sh.width = sh.width + extra_width
-
- * Cython provides a more convenient way with a ``not None`` clause::
-
- def widen_shrubbery(Shrubbery sh not None, extra_width):
- sh.width = sh.width + extra_width
-
-  * Now this function automatically checks that ``sh`` is not ``None``, as well as being the right type.
-
-* ``not None`` can only be used in Python functions (declared with ``def`` **not** ``cdef``).
-* For ``cdef`` functions, you will have to provide the check yourself.
-* The ``self`` parameter of an extension type is guaranteed to **never** be ``None``.
-* When comparing a value ``x`` with ``None``, and ``x`` is a Python object, note the following:
-
- * ``x is None`` and ``x is not None`` are very efficient.
-
- * They translate directly to C-pointer comparisons.
-
- * ``x == None`` and ``x != None`` or ``if x: ...`` (a boolean condition), will invoke Python operations and will therefore be much slower.
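The safety aspect can be seen even in plain Python: ``==`` dispatches to ``__eq__()``, which an object may override arbitrarily, while ``is`` is a plain identity (pointer) comparison. ``Chameleon`` below is a contrived example class:

```python
class Chameleon:
    def __eq__(self, other):
        return True  # claims equality with everything, including None

c = Chameleon()
print(c == None)  # True  -- __eq__ was invoked
print(c is None)  # False -- direct identity comparison
```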
+This section was moved to :ref:`extension_types_and_none`.
================
Weak Referencing
================
-* By default, weak references are not supported.
-* It can be enabled by declaring a C attribute of the ``object`` type called ``__weakref__``::
-
- cdef class ExplodingAnimal:
- """This animal will self-destruct when it is
- no longer strongly referenced."""
-
- cdef object __weakref__
+This section was moved to :ref:`making_extension_types_weak_referenceable`.
==================
Dynamic Attributes
==================
-* By default, you cannot dynamically add attributes to a ``cdef class`` instance at runtime.
-* It can be enabled by declaring a C attribute of the ``dict`` type called ``__dict__``::
-
- cdef class ExtendableAnimal:
- """This animal can be extended with new
- attributes at runtime."""
-
- cdef dict __dict__
-
-.. note::
- #. This can have a performance penalty, especially when using ``cpdef`` methods in a class.
+This section was moved to :ref:`dynamic_attributes`.
=========================
External and Public Types
@@ -434,118 +104,20 @@
Public
======
-* When an extension type is declared ``public``, Cython will generate a C-header (".h") file.
-* The header file will contain the declarations for its **object-struct** and its **type-object**.
-* External C-code can now access the attributes of the extension type.
-
+This section was moved to :ref:`public`.
External
========
-* An ``extern`` extension type allows you to gain access to the internals of:
-
- * Python objects defined in the Python core.
- * Non-Cython extension modules
-
-* The following example lets you get at the C-level members of Python's built-in "complex" object::
-
- cdef extern from "complexobject.h":
-
- struct Py_complex:
- double real
- double imag
-
- ctypedef class __builtin__.complex [object PyComplexObject]:
- cdef Py_complex cval
-
- # A function which uses the above type
- def spam(complex c):
- print "Real:", c.cval.real
- print "Imag:", c.cval.imag
-
-.. note:: Some important things in the example:
- #. ``ctypedef`` has been used because Python's header file has the struct declared with::
-
- ctypedef struct {
- ...
- } PyComplexObject;
-
- #. The module where this type object can be found is specified alongside the name of the extension type. See **Implicit Importing**.
-
- #. When declaring an external extension type...
-
- * Don't declare any methods, because they are Python methods and are not needed.
- * Similar to **structs** and **unions**, extension classes declared inside a ``cdef extern from`` block only need to declare the C members which you will actually need to access in your module.
-
+This section was moved to :ref:`external_extension_types`.
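For comparison, the C-level ``cval.real``/``cval.imag`` members read in the deleted ``spam`` example surface in ordinary Python as the ``real`` and ``imag`` attributes of ``complex``; a plain-Python sketch of the same function:

```python
def spam(c):
    # Python-level view of the same data the Cython example reads
    # directly from the PyComplexObject struct.
    print("Real:", c.real)
    print("Imag:", c.imag)
    return c.real, c.imag

spam(1.0 + 2.0j)
```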
Name Specification Clause
=========================
-.. note:: Only available to **public** and **extern** extension types.
-
-* Example::
-
- [object object_struct_name, type type_object_name ]
-
-* ``object_struct_name`` is the name to assume for the type's C-struct.
-* ``type_object_name`` is the name to assume for the type's statically declared type-object.
-* The object and type clauses can be written in any order.
-* For ``cdef extern from`` declarations, this clause **is required**.
-
- * The object clause is required because Cython must generate code that is compatible with the declarations in the header file.
- * Otherwise the object clause is optional.
-
-* For public extension types, both the object and type clauses **are required** for Cython to generate code that is compatible with external C-code.
+This section was moved to :ref:`name_specification_clause`.
================================
Type Names vs. Constructor Names
================================
-* In a Cython module, the name of an extension type serves two distinct purposes:
-
- #. When used in an expression, it refers to a "module-level" global variable holding the type's constructor (i.e. its type-object)
- #. It can also be used as a C-type name to declare a "type" for variables, arguments, and return values.
-
-* Example::
-
- cdef extern class MyModule.Spam:
- ...
-
- * The name "Spam" serves both of these roles.
- * Only "Spam" can be used as the type-name.
- * The constructor can be referred to by other names.
- * Upon an explicit import of "MyModule"...
-
- * ``MyModule.Spam()`` could be used as the constructor call.
- * ``MyModule.Spam`` could not be used as a type-name
-
-* When an "as" clause is used, the name specified takes over both roles::
-
- cdef extern class MyModule.Spam as Yummy:
- ...
-
- * ``Yummy`` becomes both type-name and a name for the constructor.
- * There are other ways, of course, to get hold of the constructor, but ``Yummy`` is the only usable type-name.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+This section was moved to :ref:`types_names_vs_constructor_names`.
diff -Nru cython-0.26.1/docs/src/reference/index.rst cython-0.29.14/docs/src/reference/index.rst
--- cython-0.26.1/docs/src/reference/index.rst 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/docs/src/reference/index.rst 2018-11-24 09:20:06.000000000 +0000
@@ -11,12 +11,7 @@
:maxdepth: 2
compilation
- language_basics
- extension_types
- interfacing_with_other_code
- special_mention
- limitations
- directives
+
Indices and tables
------------------
diff -Nru cython-0.26.1/docs/src/reference/language_basics.rst cython-0.29.14/docs/src/reference/language_basics.rst
--- cython-0.26.1/docs/src/reference/language_basics.rst 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/docs/src/reference/language_basics.rst 2018-11-24 09:20:06.000000000 +0000
@@ -1,23 +1,19 @@
.. highlight:: cython
-
-.. _language_basics:
-
***************
Language Basics
***************
+.. note::
+
+ The sections in this page were moved to the :ref:`language-basics` in the userguide.
+
=================
Cython File Types
=================
-There are three file types in Cython:
-
-* Implementation files carry a ``.pyx`` suffix
-* Definition files carry a ``.pxd`` suffix
-* Include files which carry a ``.pxi`` suffix
-
+This section was moved to :ref:`cython_file_types`.
Implementation File
===================
@@ -25,805 +21,156 @@
What can it contain?
--------------------
-* Basically anything Cythonic, but see below.
-
What can't it contain?
----------------------
-* There are some restrictions when it comes to **extension types**, if the extension type is
- already defined elsewhere... **more on this later**
-
-
Definition File
===============
What can it contain?
--------------------
-* Any kind of C type declaration.
-* ``extern`` C function or variable declarations.
-* Declarations for module implementations.
-* The definition parts of **extension types**.
-* All declarations of functions, etc., for an **external library**
-
What can't it contain?
----------------------
-* Any non-extern C variable declaration.
-* Implementations of C or Python functions.
-* Python class definitions
-* Python executable statements.
-* Any declaration that is defined as **public** to make it accessible to other Cython modules.
-
- * This is not necessary, as it is automatic.
- * a **public** declaration is only needed to make it accessible to **external C code**.
-
What else?
----------
cimport
```````
-* Use the **cimport** statement, as you would Python's import statement, to access these files
- from other definition or implementation files.
-* **cimport** does not need to be called in a ``.pyx`` file for a ``.pxd`` file that has the
- same name, as they are already in the same namespace.
-* For cimport to find the stated definition file, the path to the file must be appended to the
- ``-I`` option of the **Cython compile command**.
-
compilation order
`````````````````
-* When a ``.pyx`` file is to be compiled, Cython first checks to see if a corresponding ``.pxd`` file
- exists and processes it first.
-
-
-
Include File
============
What can it contain?
--------------------
-* Any Cythonic code really, because the entire file is textually embedded at the location
- you prescribe.
-
How do I use it?
----------------
-* Include the ``.pxi`` file with an ``include`` statement like: ``include "spamstuff.pxi"``
-* The ``include`` statement can appear anywhere in your Cython file and at any indentation level
-* The code in the ``.pxi`` file needs to be rooted at the "zero" indentation level.
-* The included code can itself contain other ``include`` statements.
-
-
====================
Declaring Data Types
====================
-
-As a dynamic language, Python encourages a programming style of considering classes and objects in terms of their methods and attributes, more than where they fit into the class hierarchy.
-
-This can make Python a very relaxed and comfortable language for rapid development, but with a price - the 'red tape' of managing data types is dumped onto the interpreter. At run time, the interpreter does a lot of work searching namespaces, fetching attributes and parsing argument and keyword tuples. This run-time ‘late binding’ is a major cause of Python’s relative slowness compared to ‘early binding’ languages such as C++.
-
-However with Cython it is possible to gain significant speed-ups through the use of ‘early binding’ programming techniques.
-
-.. note:: Typing is not a necessity
-
- Providing static typing to parameters and variables is a convenience to speed up your code, but it is not a necessity. Optimize where and when needed.
-
+This section was moved to :ref:`declaring_data_types`.
The cdef Statement
==================
-The ``cdef`` statement is used to make C level declarations for:
-
-:Variables:
-
-::
-
- cdef int i, j, k
- cdef float f, g[42], *h
-
-:Structs:
-
-::
-
- cdef struct Grail:
- int age
- float volume
-
-.. note:: Structs can be declared as ``cdef packed struct``, which has
-   the same effect as the C directive ``#pragma pack(1)``.
-
-:Unions:
-
-::
-
- cdef union Food:
- char *spam
- float *eggs
-
-
-:Enums:
-
-::
-
- cdef enum CheeseType:
- cheddar, edam,
- camembert
-
-Declaring an enum as ``cpdef`` will create a :pep:`435`-style Python wrapper::
-
- cpdef enum CheeseState:
- hard = 1
- soft = 2
- runny = 3
-
-:Functions:
-
-::
-
- cdef int eggs(unsigned long l, float f):
- ...
-
-:Extension Types:
-
-::
-
- cdef class Spam:
- ...
-
-
-.. note:: Constants
-
- Constants can be defined by using an anonymous enum::
-
- cdef enum:
- tons_of_spam = 3
-
+This section was moved to :ref:`c_variable_and_type_definitions`.
Grouping cdef Declarations
==========================
-A series of declarations can grouped into a ``cdef`` block::
-
- cdef:
- struct Spam:
- int tons
-
- int i
- float f
- Spam *p
-
- void f(Spam *s):
- print s.tons, "Tons of spam"
-
-
-.. note:: ctypedef statement
+This section was moved to :ref:`c_variable_and_type_definitions`.
- The ``ctypedef`` statement is provided for naming types::
-
- ctypedef unsigned long ULong
-
- ctypedef int *IntPtr
+C types and Python classes
+==========================
+This section was moved to :ref:`types`.
Parameters
==========
-* Both C and Python **function** types can be declared to have parameters with C data types.
-* Use normal C declaration syntax::
-
- def spam(int i, char *s):
- ...
-
- cdef int eggs(unsigned long l, float f):
- ...
-
-* As these parameters are passed into a Python declared function, they are magically **converted** to the specified C type value.
-
- * This holds true for only numeric and string types
-
-* If no type is specified for a parameter or a return value, it is assumed to be a Python object
-
- * The following takes two Python objects as parameters and returns a Python object::
-
- cdef spamobjs(x, y):
- ...
-
- .. note::
-
- This is different from C language behavior, where it is an int by default.
-
-
-
-* Python object types have reference counting performed according to the standard Python C-API rules:
-
- * Borrowed references are taken as parameters
- * New references are returned
-
-.. todo::
- link or label here the one ref count caveat for NumPy.
-
-* The name ``object`` can be used to explicitly declare something as a Python object.
-
- * For the sake of code clarity, it is recommended to always use ``object`` explicitly in your code.
-
- * This is also useful for cases where the name being declared would otherwise be taken for a type::
-
- cdef foo(object int):
- ...
-
- * As a return type::
-
- cdef object foo(object int):
- ...
-
-.. todo::
- Do a see also here ..??
-
-Optional Arguments
-------------------
-
-* Are supported for ``cdef`` and ``cpdef`` functions
-* There are differences though whether you declare them in a ``.pyx`` file or a ``.pxd`` file:
-
- * When in a ``.pyx`` file, the signature is the same as it is in Python itself::
-
- cdef class A:
- cdef foo(self):
- print "A"
- cdef class B(A):
- cdef foo(self, x=None):
- print "B", x
- cdef class C(B):
- cpdef foo(self, x=True, int k=3):
- print "C", x, k
-
-
- * When in a ``.pxd`` file, the signature is different like this example: ``cdef foo(x=*)``::
-
- cdef class A:
- cdef foo(self)
- cdef class B(A):
- cdef foo(self, x=*)
- cdef class C(B):
- cpdef foo(self, x=*, int k=*)
-
-
- * The number of arguments may increase when subclassing, but the arg types and order must be the same.
-
-* There may be a slight performance penalty when the optional arg is overridden with one that does not have default values.
-
-Keyword-only Arguments
-=======================
-
-* As in Python 3, ``def`` functions can have keyword-only arguments listed after a ``"*"`` parameter and before a ``"**"`` parameter if any::
-
- def f(a, b, *args, c, d = 42, e, **kwds):
- ...
-
- * Shown above, the ``c``, ``d`` and ``e`` arguments cannot be passed as positional arguments and must be passed as keyword arguments.
- * Furthermore, ``c`` and ``e`` are required keyword arguments since they do not have a default value.
-
-* If the parameter name after the ``"*"`` is omitted, the function will not accept any extra positional arguments::
-
- def g(a, b, *, c, d):
- ...
-
- * Shown above, the signature takes exactly two positional parameters and has two required keyword parameters
-
-
+This section was moved to :ref:`python_functions_vs_c_functions`.
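The keyword-only argument rules described in the deleted section are the same as in Python 3, so they can be exercised directly in plain Python:

```python
def f(a, b, *args, c, d=42, e, **kwds):
    return a, b, args, c, d, e, kwds

def g(a, b, *, c, d):
    return a, b, c, d

# c and e have no defaults, so they are required keyword arguments:
assert f(1, 2, 3, c=4, e=5) == (1, 2, (3,), 4, 42, 5, {})
assert g(1, 2, c=3, d=4) == (1, 2, 3, 4)
try:
    g(1, 2, 3, 4)  # extra positional arguments are rejected
except TypeError:
    pass
```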
Automatic Type Conversion
=========================
-* For basic numeric and string types, conversion happens automatically in most situations when a Python object is used in the context of a C value, and vice versa.
-
-* The following table summarizes the conversion possibilities, assuming ``sizeof(int) == sizeof(long)``:
-
- +----------------------------+--------------------+------------------+
- | C types | From Python types | To Python types |
- +============================+====================+==================+
- | [unsigned] char | int, long | int |
- +----------------------------+ | |
- | [unsigned] short | | |
- +----------------------------+ | |
- | int, long | | |
- +----------------------------+--------------------+------------------+
- | unsigned int | int, long | long |
- +----------------------------+ | |
- | unsigned long | | |
- +----------------------------+ | |
- | [unsigned] long long | | |
- +----------------------------+--------------------+------------------+
- | float, double, long double | int, long, float | float |
- +----------------------------+--------------------+------------------+
- | char * | str/bytes | str/bytes [#]_ |
- +----------------------------+--------------------+------------------+
- | struct | | dict |
- +----------------------------+--------------------+------------------+
-
-.. note::
- **Python String in a C Context**
-
- * A Python string, passed to a C context expecting a ``char*``, is only valid as long as the Python string exists.
- * A reference to the Python string must be kept around for as long as the C string is needed.
- * If this can't be guaranteed, then make a copy of the C string.
- * Cython may produce an error message: ``Obtaining char* from a temporary Python value`` and will not resume compiling in situations like this::
-
- cdef char *s
- s = pystring1 + pystring2
-
- * The reason is that concatenating two strings in Python produces a temporary string object.
-
- * The variable is decrefed, and the Python string deallocated as soon as the statement has finished,
-
- * Therefore the lvalue ``s`` is left dangling.
-
- * The solution is to assign the result of the concatenation to a Python variable, and then obtain the ``char*`` from that::
-
- cdef char *s
- p = pystring1 + pystring2
- s = p
-
- .. note::
- **It is up to you to be aware of this, and not to depend on Cython's error message, as it is not guaranteed to be generated for every situation.**
-
+This section was moved to :ref:`type-conversion`.
Type Casting
-=============
-
-* The syntax used in type casting is ``<`` and ``>``, as in ``<type>value``
-
- .. note::
- The syntax is different from C convention
-
- ::
-
- cdef char *p, float *q
- p = <char*>q
-
-* If one of the types is a Python object, as in ``<object>x``, Cython will try and do a coercion.
-
- .. note:: Cython will not stop a casting where there is no conversion, but it will emit a warning.
-
-* If the address is what is wanted, cast to a ``void*`` first.
-
-
-Type Checking
--------------
-
-* A cast like ``<MyExtensionType>x`` will cast ``x`` to type ``MyExtensionType`` without any type checking at all.
-
-* To have a cast type checked, use syntax like ``<MyExtensionType?>x``.
-
- * In this case, Cython will throw an error if ``x`` is not an instance of ``MyExtensionType`` (or a subclass)
+============
-* Automatic type checking for extension types can be obtained whenever ``isinstance()`` is used with an extension type as the second parameter
+This section was moved to :ref:`type_casting`.
+Checked Type Casts
+------------------
-Python Objects
-==============
+This section was moved to :ref:`checked_type_casts`.
==========================
Statements and Expressions
==========================
-* For the most part, control structures and expressions follow Python syntax.
-* When applied to Python objects, the semantics are the same unless otherwise noted.
-* Most Python operators can be applied to C values with the obvious semantics.
-* An expression with mixed Python and C values will have **conversions** performed automatically.
-* Python operations are automatically checked for errors, with the appropriate action taken.
+This section was moved to :ref:`statements_and_expressions`.
Differences Between Cython and C
================================
-* Most notable are C constructs which have no direct equivalent in Python.
-
- * An integer literal is treated as a C constant
-
- * It will be truncated to whatever size your C compiler thinks appropriate.
- * Cast to a Python object like this::
-
- <object>10000000000000000000
-
- * The ``"L"``, ``"LL"`` and the ``"U"`` suffixes have the same meaning as in C
-
-* There is no ``->`` operator in Cython; instead of ``p->x``, use ``p.x``.
-* There is no unary ``*`` operator in Cython; instead of ``*p``, use ``p[0]``.
-* ``&`` is permissible and has the same semantics as in C.
-* ``NULL`` is the null C pointer.
-
- * Do NOT use 0.
- * ``NULL`` is a reserved word in Cython
-
-* Syntax for **type casts** is ``<type>value``.
-
Scope Rules
===========
-* All scoping (local, module, built-in) in Cython is determined statically.
-* As with Python, a variable assignment which is not declared explicitly is implicitly declared to be a Python variable residing in the scope where it was assigned.
-
-.. note::
- * Module-level scope behaves the same way as a Python local scope if you refer to the variable before assigning to it.
-
- * Tricks, like the following will NOT work in Cython::
-
- try:
- x = True
- except NameError:
- True = 1
-
- * The above example will not work because ``True`` will always be looked up in the module-level scope. Do the following instead::
-
- import __builtin__
- try:
- True = __builtin__.True
- except AttributeError:
- True = 1
-
-
Built-in Constants
==================
-Predefined Python built-in constants:
-
-* None
-* True
-* False
-
-
Operator Precedence
===================
-* Cython uses Python precedence order, not C
-
-
For-loops
==========
-The "for ... in iterable" loop works as in Python, but is even more versatile
-in Cython as it can additionally be used on C types.
-
-* ``range()`` is C optimized when the index value has been declared by ``cdef``,
- for example::
-
- cdef size_t i
- for i in range(n):
- ...
-
-* Iteration over C arrays and sliced pointers is supported and automatically
- infers the type of the loop variable, e.g.::
-
- cdef double* data = ...
- for x in data[:10]:
- ...
-
-* Iterating over many builtin types such as lists and tuples is optimized.
-
-* There is also a more verbose C-style for-from syntax which, however, is
- deprecated in favour of the normal Python "for ... in range()" loop. You
- might still find it in legacy code that was written for Pyrex, though.
-
- * The target expression must be a plain variable name.
-
- * The name between the lower and upper bounds must be the same as the target name::
-
- for i from 0 <= i < n:
- ...
-
- * Or when using a step size::
-
- for i from 0 <= i < n by s:
- ...
-
- * To reverse the direction, reverse the conditional operation::
-
- for i from n > i >= 0:
- ...
-
-* The ``break`` and ``continue`` statements are permissible.
-
-* Can contain an else clause.
-
-
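The ``break``/``continue``/``else`` semantics listed above match plain Python exactly (the C-level optimisation only kicks in once the index is ``cdef``-typed), so they can be demonstrated in ordinary Python:

```python
total = 0
for i in range(10):
    if i == 3:
        continue   # skip one value
    if i == 7:
        break      # leave early: the 'else' clause will NOT run
    total += i
else:
    total = -1     # only reached if the loop finishes without 'break'

assert total == 0 + 1 + 2 + 4 + 5 + 6  # i.e. 18
```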
=====================
Functions and Methods
=====================
-* There are three types of function declarations in Cython as the sub-sections show below.
-* Only "Python" functions can be called outside a Cython module from *Python interpreted code*.
+This section was moved to :ref:`python_functions_vs_c_functions`.
Callable from Python (def)
==========================
-* Are declared with the ``def`` statement
-* Are called with Python objects
-* Return Python objects
-* See **Parameters** for special consideration
-
-.. _cdef:
-
Callable from C (cdef)
======================
-* Are declared with the ``cdef`` statement.
-* Are called with either Python objects or C values.
-* Can return either Python objects or C values.
-
-.. _cpdef:
-
Callable from both Python and C (cpdef)
=======================================
-* Are declared with the ``cpdef`` statement.
-* Can be called from anywhere, because it uses a little Cython magic.
-* Uses the faster C calling conventions when being called from other Cython code.
-
Overriding
==========
-``cpdef`` methods can override ``cdef`` methods::
-
- cdef class A:
- cdef foo(self):
- print "A"
-
- cdef class B(A):
- cdef foo(self, x=None):
- print "B", x
-
- cdef class C(B):
- cpdef foo(self, x=True, int k=3):
- print "C", x, k
-
-When subclassing an extension type with a Python class,
-``def`` methods can override ``cpdef`` methods but not ``cdef``
-methods::
-
- cdef class A:
- cdef foo(self):
- print("A")
-
- cdef class B(A):
- cpdef foo(self):
- print("B")
-
- class C(B): # NOTE: not cdef class
- def foo(self):
- print("C")
-
-If ``C`` above were an extension type (``cdef class``),
-this would not work correctly.
-The Cython compiler will give a warning in that case.
-
+This section was moved to :ref:`overriding_in_extension_types`.
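The override chain in the deleted example can be mimicked with plain Python classes; what Cython adds is the fast C dispatch for ``cdef``/``cpdef`` methods, but the way signatures widen down the hierarchy looks the same:

```python
class A:
    def foo(self):
        return "A"

class B(A):
    def foo(self, x=None):       # subclasses may add optional arguments...
        return ("B", x)

class C(B):
    def foo(self, x=True, k=3):  # ...and widen them further
        return ("C", x, k)

assert A().foo() == "A"
assert B().foo() == ("B", None)
assert C().foo(k=7) == ("C", True, 7)
```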
Function Pointers
=================
-* Functions declared in a ``struct`` are automatically converted to function pointers.
-* see **using exceptions with function pointers**
-
-
Python Built-ins
================
-Cython compiles calls to most built-in functions into direct calls to
-the corresponding Python/C API routines, making them particularly fast.
+This section was moved to :ref:`built_in_functions`.
+
+Optional Arguments
+==================
-Only direct function calls using these names are optimised. If you do
-something else with one of these names that assumes it's a Python object,
-such as assign it to a Python variable, and later call it, the call will
-be made as a Python function call.
-
-+------------------------------+-------------+----------------------------+
-| Function and arguments | Return type | Python/C API Equivalent |
-+==============================+=============+============================+
-| abs(obj) | object, | PyNumber_Absolute, fabs, |
-| | double, ... | fabsf, ... |
-+------------------------------+-------------+----------------------------+
-| callable(obj) | bint | PyObject_Callable |
-+------------------------------+-------------+----------------------------+
-| delattr(obj, name) | None | PyObject_DelAttr |
-+------------------------------+-------------+----------------------------+
-| exec(code, [glob, [loc]]) | object | - |
-+------------------------------+-------------+----------------------------+
-| dir(obj) | list | PyObject_Dir |
-+------------------------------+-------------+----------------------------+
-| divmod(a, b) | tuple | PyNumber_Divmod |
-+------------------------------+-------------+----------------------------+
-| getattr(obj, name, [default])| object | PyObject_GetAttr |
-| (Note 1) | | |
-+------------------------------+-------------+----------------------------+
-| hasattr(obj, name) | bint | PyObject_HasAttr |
-+------------------------------+-------------+----------------------------+
-| hash(obj) | int / long | PyObject_Hash |
-+------------------------------+-------------+----------------------------+
-| intern(obj) | object | Py*_InternFromString |
-+------------------------------+-------------+----------------------------+
-| isinstance(obj, type) | bint | PyObject_IsInstance |
-+------------------------------+-------------+----------------------------+
-| issubclass(obj, type) | bint | PyObject_IsSubclass |
-+------------------------------+-------------+----------------------------+
-| iter(obj, [sentinel]) | object | PyObject_GetIter |
-+------------------------------+-------------+----------------------------+
-| len(obj) | Py_ssize_t | PyObject_Length |
-+------------------------------+-------------+----------------------------+
-| pow(x, y, [z]) | object | PyNumber_Power |
-+------------------------------+-------------+----------------------------+
-| reload(obj) | object | PyImport_ReloadModule |
-+------------------------------+-------------+----------------------------+
-| repr(obj) | object | PyObject_Repr |
-+------------------------------+-------------+----------------------------+
-| setattr(obj, name) | void | PyObject_SetAttr |
-+------------------------------+-------------+----------------------------+
-
-Note 1: Pyrex originally provided a function :func:`getattr3(obj, name, default)`
-corresponding to the three-argument form of the Python builtin :func:`getattr()`.
-Cython still supports this function, but the usage is deprecated in favour of
-the normal builtin, which Cython can optimise in both forms.
+This section was moved to :ref:`optional_arguments`.
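A few of the builtins from the deleted table, exercised directly in plain Python; when compiled, Cython maps direct calls like these onto the C-API routines listed there:

```python
obj = [1, 2, 3]
assert len(obj) == 3           # compiled to PyObject_Length
assert isinstance(obj, list)   # PyObject_IsInstance
assert issubclass(bool, int)   # PyObject_IsSubclass
assert divmod(7, 2) == (3, 1)  # PyNumber_Divmod
assert hasattr(obj, "sort")    # PyObject_HasAttr
assert callable(getattr(obj, "append"))
```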
+Keyword-only Arguments
+=======================
+
+This section was moved to :ref:`keyword_only_argument`.
============================
Error and Exception Handling
============================
-* A plain ``cdef`` declared function that does not return a Python object...
-
- * Has no way of reporting a Python exception to its caller.
- * Will only print a warning message, and the exception is ignored.
-
-* In order to propagate exceptions like this to its caller, you need to declare an exception value for it.
-* There are three forms of declaring an exception for a C compiled program.
-
- * First::
-
- cdef int spam() except -1:
- ...
-
- * In the example above, if an error occurs inside spam, it will immediately return with the value of ``-1``, causing an exception to be propagated to its caller.
- * Functions declared with an exception value should explicitly prevent returning that value.
-
- * Second::
-
- cdef int spam() except? -1:
- ...
-
- * Used when a ``-1`` may possibly be returned and is not to be considered an error.
- * The ``"?"`` tells Cython that ``-1`` only indicates a *possible* error.
- * Now, each time ``-1`` is returned, Cython generates a call to ``PyErr_Occurred`` to verify it is an actual error.
-
- * Third::
-
- cdef int spam() except *
-
- * A call to ``PyErr_Occurred`` happens *every* time the function gets called.
-
- .. note:: Returning ``void``
-
- A need to propagate errors when returning ``void`` must use this version.
-
-* Exception values can only be declared for functions returning one of the following types:
-
- * integer
- * enum
- * float
- * pointer type
- * The exception value itself must be a constant expression
-
-.. note::
-
- .. note:: Function pointers
-
- * Require the same exception value specification as its user has declared.
- * Use cases here are when used as parameters and when assigned to a variable::
-
- int (*grail)(int, char *) except -1
-
- .. note:: Python Objects
-
- * Declared exception values are **not** needed.
- * Remember that Cython assumes that a function without a declared return type returns a Python object.
- * Exceptions on such functions are implicitly propagated by returning ``NULL``
-
- .. note:: C++
-
- * For exceptions from C++ compiled programs, see **Wrapping C++ Classes**
+This section was moved to :ref:`error_return_values`.
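The deleted section described Cython's ``except -1`` convention. A hedged plain-Python sketch of what that convention automates (the function names ``c_spam``/``spam`` are illustrative, not from any API):

```python
def c_spam(n):
    # C-style function: returns -1 as an error sentinel instead of raising.
    if n < 0:
        return -1
    return n * 2

def spam(n):
    # What 'cdef int spam() except -1' generates for you: check the
    # sentinel and turn it back into a real Python exception.
    result = c_spam(n)
    if result == -1:
        raise ValueError("spam failed")
    return result

assert spam(21) == 42
```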
Checking return values for non-Cython functions..
=================================================
-* Do not try to raise exceptions by returning the specified value. Example::
-
- cdef extern FILE *fopen(char *filename, char *mode) except NULL # WRONG!
-
- * The except clause does not work that way.
- * Its only purpose is to propagate Python exceptions that have already been raised by either...
-
- * A Cython function
- * A C function that calls Python/C API routines.
-
-* To propagate an exception for these circumstances you need to raise it yourself::
-
- cdef FILE *p
- p = fopen("spam.txt", "r")
- if p == NULL:
- raise SpamError("Couldn't open the spam file")
+This section was moved to :ref:`checking_return_values_of_non_cython_functions`.
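The same pattern in plain-Python terms: when a foreign call reports failure through its return value, you must raise the exception yourself (``SpamError`` and ``fopen_like`` are illustrative stand-ins, not real APIs):

```python
import os

class SpamError(Exception):
    pass

def fopen_like(path):
    # C-style stand-in for fopen(): returns None (the NULL analogue)
    # on failure instead of raising an exception.
    return open(path) if os.path.exists(path) else None

try:
    p = fopen_like("/no/such/spam.txt")
    if p is None:
        # An 'except' clause on the extern declaration would NOT do this;
        # the exception has to be raised explicitly:
        raise SpamError("Couldn't open the spam file")
except SpamError as exc:
    caught = str(exc)

assert caught == "Couldn't open the spam file"
```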
=======================
Conditional Compilation
=======================
-* The expressions in the following sub-sections must be valid compile-time expressions.
-* They can evaluate to any Python value.
-* The *truth* of the result is determined in the usual Python way.
+This section was moved to :ref:`conditional_compilation`.
Compile-Time Definitions
=========================
-* Defined using the ``DEF`` statement::
-
- DEF FavouriteFood = "spam"
- DEF ArraySize = 42
- DEF OtherArraySize = 2 * ArraySize + 17
-
-* The right hand side must be a valid compile-time expression made up of either:
-
- * Literal values
- * Names defined by other ``DEF`` statements
-
-* They can be combined using any Python expression syntax
-* Cython provides the following predefined names
-
- * Corresponding to the values returned by ``os.uname()``
-
- * UNAME_SYSNAME
- * UNAME_NODENAME
- * UNAME_RELEASE
- * UNAME_VERSION
- * UNAME_MACHINE
-
-* A name defined by ``DEF`` can appear anywhere an identifier can appear.
-* Cython replaces the name with the literal value before compilation.
-
- * The compile-time expression, in this case, must evaluate to a Python value of ``int``, ``long``, ``float``, or ``str``::
-
- cdef int a1[ArraySize]
- cdef int a2[OtherArraySize]
- print "I like", FavouriteFood
-
-
Conditional Statements
=======================
-
-* Similar in semantics to the C pre-processor
-* The following statements can be used to conditionally include or exclude sections of code to compile.
-
- * ``IF``
- * ``ELIF``
- * ``ELSE``
-
-::
-
- IF UNAME_SYSNAME == "Windows":
- include "icky_definitions.pxi"
- ELIF UNAME_SYSNAME == "Darwin":
- include "nice_definitions.pxi"
- ELIF UNAME_SYSNAME == "Linux":
- include "penguin_definitions.pxi"
- ELSE:
- include "other_definitions.pxi"
-
-* ``ELIF`` and ``ELSE`` are optional.
-* ``IF`` can appear anywhere that a normal statement or declaration can appear
-* It can contain any statements or declarations that would be valid in that context.
-
- * This includes other ``IF`` and ``DEF`` statements
-
-
-
-.. [#] The conversion is to/from str for Python 2.x, and bytes for Python 3.x.
diff -Nru cython-0.26.1/docs/src/reference/Makefile cython-0.29.14/docs/src/reference/Makefile
--- cython-0.26.1/docs/src/reference/Makefile 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/docs/src/reference/Makefile 1970-01-01 00:00:00.000000000 +0000
@@ -1,68 +0,0 @@
-# Makefile for Sphinx documentation
-#
-
-# You can set these variables from the command line.
-SPHINXOPTS =
-SPHINXBUILD = sphinx-build
-PAPER =
-
-# Internal variables.
-PAPEROPT_a4 = -D latex_paper_size=a4
-PAPEROPT_letter = -D latex_paper_size=letter
-ALLSPHINXOPTS = -d build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
-
-.PHONY: help clean html web htmlhelp latex changes linkcheck
-
-help:
- @echo "Please use \`make ' where is one of"
- @echo " html to make standalone HTML files"
- @echo " web to make files usable by Sphinx.web"
- @echo " htmlhelp to make HTML files and a HTML help project"
- @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
- @echo " changes to make an overview over all changed/added/deprecated items"
- @echo " linkcheck to check all external links for integrity"
-
-clean:
- -rm -rf build/*
-
-html:
- mkdir -p build/html build/doctrees
- $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) build/html
- @echo
- @echo "Build finished. The HTML pages are in build/html."
-
-web:
- mkdir -p build/web build/doctrees
- $(SPHINXBUILD) -b web $(ALLSPHINXOPTS) build/web
- @echo
- @echo "Build finished; now you can run"
- @echo " python -m sphinx.web build/web"
- @echo "to start the server."
-
-htmlhelp:
- mkdir -p build/htmlhelp build/doctrees
- $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) build/htmlhelp
- @echo
- @echo "Build finished; now you can run HTML Help Workshop with the" \
- ".hhp project file in build/htmlhelp."
-
-latex:
- mkdir -p build/latex build/doctrees
- $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) build/latex
- @echo
- @echo "Build finished; the LaTeX files are in build/latex."
- @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
- "run these through (pdf)latex."
-
-changes:
- mkdir -p build/changes build/doctrees
- $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) build/changes
- @echo
- @echo "The overview file is in build/changes."
-
-linkcheck:
- mkdir -p build/linkcheck build/doctrees
- $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) build/linkcheck
- @echo
- @echo "Link check complete; look for any errors in the above output " \
- "or in build/linkcheck/output.txt."
diff -Nru cython-0.26.1/docs/src/reference/special_methods_table.rst cython-0.29.14/docs/src/reference/special_methods_table.rst
--- cython-0.26.1/docs/src/reference/special_methods_table.rst 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/docs/src/reference/special_methods_table.rst 2018-11-24 09:20:06.000000000 +0000
@@ -1,220 +1,32 @@
-.. _special_methods_table:
-
Special Methods Table
---------------------
-This table lists all of the special methods together with their parameter and
-return types. In the table below, a parameter name of self is used to indicate
-that the parameter has the type that the method belongs to. Other parameters
-with no type specified in the table are generic Python objects.
-
-You don't have to declare your method as taking these parameter types. If you
-declare different types, conversions will be performed as necessary.
+You can find an updated version of the special methods table
+in :ref:`special_methods_table`.
General
^^^^^^^
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| Name | Parameters | Return type | Description |
-+=======================+=======================================+=============+=====================================================+
-| __cinit__ |self, ... | | Basic initialisation (no direct Python equivalent) |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __init__ |self, ... | | Further initialisation |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __dealloc__ |self | | Basic deallocation (no direct Python equivalent) |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __cmp__ |x, y | int | 3-way comparison |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __richcmp__ |x, y, int op | object | Rich comparison (no direct Python equivalent) |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __str__ |self | object | str(self) |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __repr__ |self | object | repr(self) |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __hash__ |self | int | Hash function |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __call__ |self, ... | object | self(...) |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __iter__ |self | object | Return iterator for sequence |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __getattr__ |self, name | object | Get attribute |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __getattribute__ |self, name | object | Get attribute, unconditionally |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __setattr__ |self, name, val | | Set attribute |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __delattr__ |self, name | | Delete attribute |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
+Rich comparison operators
+^^^^^^^^^^^^^^^^^^^^^^^^^
Arithmetic operators
^^^^^^^^^^^^^^^^^^^^
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| Name | Parameters | Return type | Description |
-+=======================+=======================================+=============+=====================================================+
-| __add__ | x, y | object | binary `+` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __sub__ | x, y | object | binary `-` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __mul__ | x, y | object | `*` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __div__ | x, y | object | `/` operator for old-style division |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __floordiv__ | x, y | object | `//` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __truediv__ | x, y | object | `/` operator for new-style division |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __mod__ | x, y | object | `%` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __divmod__ | x, y | object | combined div and mod |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __pow__ | x, y, z | object | `**` operator or pow(x, y, z) |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __neg__ | self | object | unary `-` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __pos__ | self | object | unary `+` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __abs__ | self | object | absolute value |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __nonzero__ | self | int | convert to boolean |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __invert__ | self | object | `~` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __lshift__ | x, y | object | `<<` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __rshift__ | x, y | object | `>>` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __and__ | x, y | object | `&` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __or__ | x, y | object | `|` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __xor__ | x, y | object | `^` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-
Numeric conversions
^^^^^^^^^^^^^^^^^^^
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| Name | Parameters | Return type | Description |
-+=======================+=======================================+=============+=====================================================+
-| __int__ | self | object | Convert to integer |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __long__ | self | object | Convert to long integer |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __float__ | self | object | Convert to float |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __oct__ | self | object | Convert to octal |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __hex__ | self | object | Convert to hexadecimal |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __index__ | self | object | Convert to sequence index |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-
In-place arithmetic operators
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| Name | Parameters | Return type | Description |
-+=======================+=======================================+=============+=====================================================+
-| __iadd__ | self, x | object | `+=` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __isub__ | self, x | object | `-=` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __imul__ | self, x | object | `*=` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __idiv__ | self, x | object | `/=` operator for old-style division |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __ifloordiv__ | self, x | object | `//=` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __itruediv__ | self, x | object | `/=` operator for new-style division |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __imod__ | self, x | object | `%=` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __ipow__ | x, y, z | object | `**=` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __ilshift__ | self, x | object | `<<=` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __irshift__ | self, x | object | `>>=` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __iand__ | self, x | object | `&=` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __ior__ | self, x | object | `|=` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __ixor__ | self, x | object | `^=` operator |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-
Sequences and mappings
^^^^^^^^^^^^^^^^^^^^^^
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| Name | Parameters | Return type | Description |
-+=======================+=======================================+=============+=====================================================+
-| __len__ | self int | | len(self) |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __getitem__ | self, x | object | self[x] |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __setitem__ | self, x, y | | self[x] = y |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __delitem__ | self, x | | del self[x] |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __getslice__ | self, Py_ssize_t i, Py_ssize_t j | object | self[i:j] |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __setslice__ | self, Py_ssize_t i, Py_ssize_t j, x | | self[i:j] = x |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __delslice__ | self, Py_ssize_t i, Py_ssize_t j | | del self[i:j] |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __contains__ | self, x | int | x in self |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-
Iterators
^^^^^^^^^
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| Name | Parameters | Return type | Description |
-+=======================+=======================================+=============+=====================================================+
-| __next__ | self | object | Get next item (called next in Python) |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-
Buffer interface
^^^^^^^^^^^^^^^^
-.. note::
- The buffer interface is intended for use by C code and is not directly
- accessible from Python. It is described in the Python/C API Reference Manual
- under sections 6.6 and 10.6.
-
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| Name | Parameters | Return type | Description |
-+=======================+=======================================+=============+=====================================================+
-| __getreadbuffer__ | self, int i, void `**p` | | |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __getwritebuffer__ | self, int i, void `**p` | | |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __getsegcount__ | self, int `*p` | | |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __getcharbuffer__ | self, int i, char `**p` | | |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-
Descriptor objects
^^^^^^^^^^^^^^^^^^
-
-.. note::
- Descriptor objects are part of the support mechanism for new-style
- Python classes. See the discussion of descriptors in the Python documentation.
- See also :PEP:`252`, "Making Types Look More Like Classes", and :PEP:`253`,
- "Subtyping Built-In Types".
-
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| Name | Parameters | Return type | Description |
-+=======================+=======================================+=============+=====================================================+
-| __get__ | self, instance, class | object | Get value of attribute |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __set__ | self, instance, value | | Set value of attribute |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-| __delete__ | self, instance | | Delete attribute |
-+-----------------------+---------------------------------------+-------------+-----------------------------------------------------+
-
-
-
-
-
diff -Nru cython-0.26.1/docs/src/tutorial/appendix.rst cython-0.29.14/docs/src/tutorial/appendix.rst
--- cython-0.26.1/docs/src/tutorial/appendix.rst 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/docs/src/tutorial/appendix.rst 2018-11-24 09:20:06.000000000 +0000
@@ -14,7 +14,7 @@
includes e.g. "c:\\mingw\\bin" (if you installed MinGW to
"c:\\mingw"). The following web-page describes the procedure
in Windows XP (the Vista procedure is similar):
- http://support.microsoft.com/kb/310519
+ https://support.microsoft.com/kb/310519
4. Finally, tell Python to use MinGW as the default compiler
(otherwise it will try for Visual C). If Python is installed to
"c:\\Python27", create a file named
@@ -28,4 +28,4 @@
process smoother is welcomed; it is an unfortunate fact that none of
the regular Cython developers have convenient access to Windows.
-.. [WinInst] http://wiki.cython.org/InstallingOnWindows
+.. [WinInst] https://github.com/cython/cython/wiki/CythonExtensionsOnWindows
diff -Nru cython-0.26.1/docs/src/tutorial/array.rst cython-0.29.14/docs/src/tutorial/array.rst
--- cython-0.26.1/docs/src/tutorial/array.rst 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/docs/src/tutorial/array.rst 2018-11-24 09:20:06.000000000 +0000
@@ -18,41 +18,22 @@
Safe usage with memory views
----------------------------
-::
-
- from cpython cimport array
- import array
- cdef array.array a = array.array('i', [1, 2, 3])
- cdef int[:] ca = a
-
- print ca[0]
+.. literalinclude:: ../../examples/tutorial/array/safe_usage.pyx
NB: the import brings the regular Python array object into the namespace
while the cimport adds functions accessible from Cython.
A Python array is constructed with a type signature and sequence of
initial values. For the possible type signatures, refer to the Python
-documentation for the `array module `_.
+documentation for the `array module `_.
Notice that when a Python array is assigned to a variable typed as
memory view, there will be a slight overhead to construct the memory
view. However, from that point on the variable can be passed to other
-functions without overhead, so long as it is typed::
+functions without overhead, so long as it is typed:
- from cpython cimport array
- import array
- cdef array.array a = array.array('i', [1, 2, 3])
- cdef int[:] ca = a
-
- cdef int overhead(object a):
- cdef int[:] ca = a
- return ca[0]
+.. literalinclude:: ../../examples/tutorial/array/overhead.pyx
- cdef int no_overhead(int[:] ca):
- return ca[0]
-
- print overhead(a) # new memory view will be constructed, overhead
- print no_overhead(ca) # ca is already a memory view, so no overhead
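The overhead/no-overhead distinction described above can be sketched in plain Python with the stdlib ``memoryview`` standing in for the typed memoryview (a simplified analogue for illustration; the Cython version avoids all Python-level dispatch):

```python
import array

a = array.array('i', [1, 2, 3])
ca = memoryview(a)         # analogous to the typed memoryview cast in Cython

def overhead(obj):
    cv = memoryview(obj)   # a new view is constructed on every call
    return cv[0]

def no_overhead(cv):
    return cv[0]           # cv is already a view, nothing to construct

print(overhead(a))         # 1
print(no_overhead(ca))     # 1
```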
Zero-overhead, unsafe access to raw C pointer
---------------------------------------------
@@ -61,18 +42,7 @@
pointer. There is no type or bounds checking, so be careful to use the
right type and signedness.
-::
-
- from cpython cimport array
- import array
-
- cdef array.array a = array.array('i', [1, 2, 3])
-
- # access underlying pointer:
- print a.data.as_ints[0]
-
- from libc.string cimport memset
- memset(a.data.as_voidptr, 0, len(a) * sizeof(int))
+.. literalinclude:: ../../examples/tutorial/array/unsafe_usage.pyx
Note that any length-changing operation on the array object may invalidate the
pointer.
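This hazard can be demonstrated from plain Python: CPython refuses to resize an ``array.array`` while its buffer is exported, which is the same invalidation problem the raw pointer access above runs into at the C level (a stdlib-only sketch for illustration):

```python
import array

a = array.array('i', [1, 2, 3])
m = memoryview(a)   # exports the underlying buffer, much like a.data.as_ints

try:
    a.append(4)     # a length-changing operation while the buffer is exported
except BufferError as exc:
    print("resize blocked:", exc)

m.release()         # once the view is released, resizing works again
a.append(4)
print(a.tolist())   # [1, 2, 3, 4]
```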
@@ -85,33 +55,13 @@
and preallocate a given number of elements. The array is initialized to
zero when requested.
-::
-
- from cpython cimport array
- import array
-
- cdef array.array int_array_template = array.array('i', [])
- cdef array.array newarray
-
- # create an array with 3 elements with same type as template
- newarray = array.clone(int_array_template, 3, zero=False)
+.. literalinclude:: ../../examples/tutorial/array/clone.pyx
An array can also be extended and resized; this avoids repeated memory
reallocation which would occur if elements would be appended or removed
one by one.
-::
-
- from cpython cimport array
- import array
-
- cdef array.array a = array.array('i', [1, 2, 3])
- cdef array.array b = array.array('i', [4, 5, 6])
-
- # extend a with b, resize as needed
- array.extend(a, b)
- # resize a, leaving just original three elements
- array.resize(a, len(a) - len(b))
+.. literalinclude:: ../../examples/tutorial/array/resize.pyx
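The extend/resize pattern can also be sketched with the plain Python ``array`` module (a simplified analogue; the Cython ``cpython.array`` functions ``extend`` and ``resize`` operate on the C buffer directly):

```python
import array

a = array.array('i', [1, 2, 3])
b = array.array('i', [4, 5, 6])

a.extend(b)                # grow a by the contents of b in one step
print(a.tolist())          # [1, 2, 3, 4, 5, 6]

del a[len(a) - len(b):]    # shrink back to the original three elements
print(a.tolist())          # [1, 2, 3]
```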
API reference
diff -Nru cython-0.26.1/docs/src/tutorial/cdef_classes.rst cython-0.29.14/docs/src/tutorial/cdef_classes.rst
--- cython-0.26.1/docs/src/tutorial/cdef_classes.rst 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/docs/src/tutorial/cdef_classes.rst 2018-11-24 09:20:06.000000000 +0000
@@ -2,15 +2,9 @@
===================================
To support object-oriented programming, Cython supports writing normal
-Python classes exactly as in Python::
+Python classes exactly as in Python:
- class MathFunction(object):
- def __init__(self, name, operator):
- self.name = name
- self.operator = operator
-
- def __call__(self, *operands):
- return self.operator(*operands)
+.. literalinclude:: ../../examples/tutorial/cdef_classes/math_function.py
Based on what Python calls a "built-in type", however, Cython supports
a second kind of class: *extension types*, sometimes referred to as
@@ -33,38 +27,29 @@
So far our integration example has not been very useful as it only
integrates a single hard-coded function. In order to remedy this,
with hardly sacrificing speed, we will use a cdef class to represent a
-function on floating point numbers::
+function on floating point numbers:
- cdef class Function:
- cpdef double evaluate(self, double x) except *:
- return 0
+.. literalinclude:: ../../examples/tutorial/cdef_classes/math_function_2.pyx
The directive cpdef makes two versions of the method available; one
-fast for use from Cython and one slower for use from Python. Then::
+fast for use from Cython and one slower for use from Python. Then:
- cdef class SinOfSquareFunction(Function):
- cpdef double evaluate(self, double x) except *:
- return sin(x**2)
+.. literalinclude:: ../../examples/tutorial/cdef_classes/sin_of_square.pyx
This does slightly more than providing a python wrapper for a cdef
-method: unlike a cdef method, a cpdef method is fully overrideable by
+method: unlike a cdef method, a cpdef method is fully overridable by
methods and instance attributes in Python subclasses. It adds a
little calling overhead compared to a cdef method.
-Using this, we can now change our integration example::
+To make the class definitions visible to other modules, and thus allow for
+efficient C-level usage and inheritance outside of the module that
+implements them, we define them in a :file:`sin_of_square.pxd` file:
+
+.. literalinclude:: ../../examples/tutorial/cdef_classes/sin_of_square.pxd
- def integrate(Function f, double a, double b, int N):
- cdef int i
- cdef double s, dx
- if f is None:
- raise ValueError("f cannot be None")
- s = 0
- dx = (b-a)/N
- for i in range(N):
- s += f.evaluate(a+i*dx)
- return s * dx
+Using this, we can now change our integration example:
- print(integrate(SinOfSquareFunction(), 0, 1, 10000))
+.. literalinclude:: ../../examples/tutorial/cdef_classes/integrate.pyx
This is almost as fast as the previous code, however it is much more flexible
as the function to integrate can be changed. We can even pass in a new
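The structure of this design can be mirrored in plain Python (a sketch for illustration; the Cython version replaces the dynamic ``evaluate`` calls with fast ``cpdef`` dispatch and typed C doubles):

```python
import math

class Function:
    def evaluate(self, x):
        return 0.0

class SinOfSquareFunction(Function):
    def evaluate(self, x):
        return math.sin(x ** 2)

def integrate(f, a, b, N):
    # Left Riemann sum over N subintervals of [a, b].
    if f is None:
        raise ValueError("f cannot be None")
    s = 0.0
    dx = (b - a) / N
    for i in range(N):
        s += f.evaluate(a + i * dx)
    return s * dx

print(integrate(SinOfSquareFunction(), 0, 1, 10000))  # ≈ 0.3102
```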
@@ -103,26 +88,9 @@
There is a *compiler directive* ``nonecheck`` which turns on checks
for this, at the cost of decreased speed. Here's how compiler directives
-are used to dynamically switch on or off ``nonecheck``::
-
- #cython: nonecheck=True
- # ^^^ Turns on nonecheck globally
-
- import cython
-
- # Turn off nonecheck locally for the function
- @cython.nonecheck(False)
- def func():
- cdef MyClass obj = None
- try:
- # Turn nonecheck on again for a block
- with cython.nonecheck(True):
- print obj.myfunc() # Raises exception
- except AttributeError:
- pass
- print obj.myfunc() # Hope for a crash!
-
+are used to dynamically switch on or off ``nonecheck``:
+
+.. literalinclude:: ../../examples/tutorial/cdef_classes/nonecheck.pyx
Attributes in cdef classes behave differently from attributes in regular classes:
@@ -130,18 +98,4 @@
- Attributes are by default only accessible from Cython (typed access)
- Properties can be declared to expose dynamic attributes to Python-space
-::
-
- cdef class WaveFunction(Function):
- # Not available in Python-space:
- cdef double offset
- # Available in Python-space:
- cdef public double freq
- # Available in Python-space:
- @property
- def period(self):
- return 1.0 / self.freq
- @period.setter
- def period(self, value):
- self.freq = 1.0 / value
- <...>
+.. literalinclude:: ../../examples/tutorial/cdef_classes/wave_function.pyx
diff -Nru cython-0.26.1/docs/src/tutorial/clibraries.rst cython-0.29.14/docs/src/tutorial/clibraries.rst
--- cython-0.26.1/docs/src/tutorial/clibraries.rst 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/docs/src/tutorial/clibraries.rst 2018-11-24 09:20:06.000000000 +0000
@@ -1,5 +1,9 @@
+
+.. _using_c_libraries:
+
+******************
Using C libraries
-=================
+******************
Apart from writing fast code, one of the main use cases of Cython is
to call external C libraries from Python code. As Cython code
@@ -24,51 +28,20 @@
Defining external declarations
-------------------------------
-
-The C API of the queue implementation, which is defined in the header
-file ``libcalg/queue.h``, essentially looks like this::
-
- /* file: queue.h */
-
- typedef struct _Queue Queue;
- typedef void *QueueValue;
+==============================
- Queue *queue_new(void);
- void queue_free(Queue *queue);
+You can download CAlg `here `_.
- int queue_push_head(Queue *queue, QueueValue data);
- QueueValue queue_pop_head(Queue *queue);
- QueueValue queue_peek_head(Queue *queue);
-
- int queue_push_tail(Queue *queue, QueueValue data);
- QueueValue queue_pop_tail(Queue *queue);
- QueueValue queue_peek_tail(Queue *queue);
+The C API of the queue implementation, which is defined in the header
+file ``c-algorithms/src/queue.h``, essentially looks like this:
- int queue_is_empty(Queue *queue);
+.. literalinclude:: ../../examples/tutorial/clibraries/c-algorithms/src/queue.h
+ :language: C
To get started, the first step is to redefine the C API in a ``.pxd``
-file, say, ``cqueue.pxd``::
-
- # file: cqueue.pxd
-
- cdef extern from "libcalg/queue.h":
- ctypedef struct Queue:
- pass
- ctypedef void* QueueValue
-
- Queue* queue_new()
- void queue_free(Queue* queue)
-
- int queue_push_head(Queue* queue, QueueValue data)
- QueueValue queue_pop_head(Queue* queue)
- QueueValue queue_peek_head(Queue* queue)
+file, say, ``cqueue.pxd``:
- int queue_push_tail(Queue* queue, QueueValue data)
- QueueValue queue_pop_tail(Queue* queue)
- QueueValue queue_peek_tail(Queue* queue)
-
- bint queue_is_empty(Queue* queue)
+.. literalinclude:: ../../examples/tutorial/clibraries/cqueue.pxd
Note how these declarations are almost identical to the header file
declarations, so you can often just copy them over. However, you do
@@ -123,7 +96,7 @@
Writing a wrapper class
------------------------
+=======================
After declaring our C library's API, we can start to design the Queue
class that should wrap the C queue. It will live in a file called
@@ -138,16 +111,9 @@
library, there must not be a ``.pyx`` file with the same name
that Cython associates with it.
-Here is a first start for the Queue class::
-
- # file: queue.pyx
-
- cimport cqueue
+Here is a first start for the Queue class:
- cdef class Queue:
- cdef cqueue.Queue* _c_queue
- def __cinit__(self):
- self._c_queue = cqueue.queue_new()
+.. literalinclude:: ../../examples/tutorial/clibraries/queue.pyx
Note that it says ``__cinit__`` rather than ``__init__``. While
``__init__`` is available as well, it is not guaranteed to be run (for
@@ -172,7 +138,7 @@
Memory management
------------------
+=================
Before we continue implementing the other methods, it is important to
understand that the above implementation is not safe. In case
@@ -184,22 +150,15 @@
pointer to the new queue.
The Python way to get out of this is to raise a ``MemoryError`` [#]_.
-We can thus change the init function as follows::
+We can thus change the init function as follows:
- cimport cqueue
-
- cdef class Queue:
- cdef cqueue.Queue* _c_queue
- def __cinit__(self):
- self._c_queue = cqueue.queue_new()
- if self._c_queue is NULL:
- raise MemoryError()
+.. literalinclude:: ../../examples/tutorial/clibraries/queue2.pyx
.. [#] In the specific case of a ``MemoryError``, creating a new
exception instance in order to raise it may actually fail because
we are running out of memory. Luckily, CPython provides a C-API
function ``PyErr_NoMemory()`` that safely raises the right
- exception for us. Since version 0.14.1, Cython automatically
+ exception for us. Cython automatically
substitutes this C-API call whenever you write ``raise
MemoryError`` or ``raise MemoryError()``. If you use an older
version, you have to cimport the C-API function from the standard
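The corrected ``__cinit__`` pattern can be sketched in plain Python (the ``queue_new`` stand-in below is hypothetical and only models a C allocator that may return ``NULL``; the real code calls ``cqueue.queue_new()``):

```python
def queue_new(fail=False):
    # Hypothetical stand-in for cqueue.queue_new(): returns None to model
    # a NULL pointer when the underlying C allocation fails.
    return None if fail else object()

class Queue:
    def __init__(self, fail=False):
        self._c_queue = queue_new(fail)
        if self._c_queue is None:      # "is NULL" in the Cython version
            raise MemoryError()

Queue()                                # allocation succeeded, no exception
try:
    Queue(fail=True)
except MemoryError:
    print("MemoryError raised on failed allocation")
```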
@@ -218,7 +177,7 @@
Compiling and linking
----------------------
+=====================
At this point, we have a working Cython module that we can test. To
compile it, we need to configure a ``setup.py`` script for distutils.
@@ -232,10 +191,76 @@
ext_modules = cythonize([Extension("queue", ["queue.pyx"])])
)
-To build against the external C library, we must extend this script to
-include the necessary setup. Assuming the library is installed in the
-usual places (e.g. under ``/usr/lib`` and ``/usr/include`` on a
-Unix-like system), we could simply change the extension setup from
+
+To build against the external C library, we need to make sure Cython finds the necessary libraries.
+There are two ways to achieve this. First, we can tell distutils where to find
+the C source to compile the :file:`queue.c` implementation automatically. Alternatively,
+we can build and install C-Alg as a system library and dynamically link it. The latter is useful
+if other applications also use C-Alg.
+
+
+Static Linking
+---------------
+
+To build the C code automatically, we need to include compiler directives in :file:`queue.pyx`::
+
+ # distutils: sources = c-algorithms/src/queue.c
+ # distutils: include_dirs = c-algorithms/src/
+
+ cimport cqueue
+
+ cdef class Queue:
+ cdef cqueue.Queue* _c_queue
+ def __cinit__(self):
+ self._c_queue = cqueue.queue_new()
+ if self._c_queue is NULL:
+ raise MemoryError()
+
+ def __dealloc__(self):
+ if self._c_queue is not NULL:
+ cqueue.queue_free(self._c_queue)
+
+The ``sources`` compiler directive gives the paths of the C
+files that distutils is going to compile and
+link (statically) into the resulting extension module.
+In general, all relevant header files should be found in ``include_dirs``.
+Now we can build the project using::
+
+ $ python setup.py build_ext -i
+
+And test whether our build was successful::
+
+ $ python -c 'import queue; Q = queue.Queue()'
+
+
+Dynamic Linking
+---------------
+
+Dynamic linking is useful if the library we are going to wrap is already
+installed on the system. To perform dynamic linking, we first need to
+build and install C-Alg.
+
+To build c-algorithms on your system::
+
+ $ cd c-algorithms
+ $ sh autogen.sh
+ $ ./configure
+ $ make
+
+To install C-Alg, run::
+
+ $ make install
+
+Afterwards the file :file:`/usr/local/lib/libcalg.so` should exist.
+
+.. note::
+
+ This path applies to Linux systems and may be different on other platforms,
+ so you will need to adapt the rest of the tutorial depending on the path
+ where ``libcalg.so`` or ``libcalg.dll`` is on your system.
+
+In this approach, we need to tell the setup script to link with an external library.
+To do so, we extend the setup script and change the extension setup from
::
@@ -250,7 +275,11 @@
libraries=["calg"])
])
-If it is not installed in a 'normal' location, users can provide the
+Now we should be able to build the project using::
+
+ $ python setup.py build_ext -i
+
+If ``libcalg`` is not installed in a 'normal' location, users can provide the
required parameters externally by passing appropriate C compiler
flags, such as::
@@ -258,11 +287,18 @@
LDFLAGS="-L/usr/local/otherdir/calg/lib" \
python setup.py build_ext -i
+
+
+Before we run the module, we also need to make sure that ``libcalg`` is in
+the ``LD_LIBRARY_PATH`` environment variable, e.g. by setting::
+
+ $ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
+
Once we have compiled the module for the first time, we can now import
it and instantiate a new Queue::
$ export PYTHONPATH=.
- $ python -c 'import queue.Queue as Q ; Q()'
+ $ python -c 'import queue; Q = queue.Queue()'
However, this is all our Queue class can do so far, so let's make it
more usable.
@@ -306,24 +342,31 @@
cdef extend(self, int* values, size_t count):
"""Append all ints to the queue.
"""
- cdef size_t i
- for i in range(count):
- if not cqueue.queue_push_tail(
- self._c_queue, values[i]):
- raise MemoryError()
+ cdef int value
+ for value in values[:count]: # Slicing pointer to limit the iteration boundaries.
+ self.append(value)
-This becomes handy when reading values from a NumPy array, for
-example.
+This becomes handy when reading values from a C array, for example.
So far, we can only add data to the queue. The next step is to write
the two methods to get the first element: ``peek()`` and ``pop()``,
-which provide read-only and destructive read access respectively::
+which provide read-only and destructive read access respectively.
+To avoid compiler warnings when casting ``void*`` to ``int`` directly,
+we use an intermediate data type that is big enough to hold a ``void*``.
+Here, ``Py_ssize_t``::
cdef int peek(self):
- return cqueue.queue_peek_head(self._c_queue)
+ return cqueue.queue_peek_head(self._c_queue)
cdef int pop(self):
- return cqueue.queue_pop_head(self._c_queue)
+ return cqueue.queue_pop_head(self._c_queue)
+
+Normally, in C, we risk losing data when we convert a larger integer type
+to a smaller integer type without checking the boundaries, and ``Py_ssize_t``
+may be a larger type than ``int``. But since we control how values are added
+to the queue, we already know that all values that are in the queue fit into
+an ``int``, so the above conversion from ``void*`` to ``Py_ssize_t`` to ``int``
+(the return type) is safe by design.
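The narrowing risk described above can be illustrated in plain Python with the standard ``ctypes`` module (a standalone sketch, not part of the queue code):

```python
import ctypes

# A value that fits into a C int survives the round-trip unchanged ...
assert ctypes.c_int(42).value == 42

# ... but a value wider than 32 bits is silently truncated, which is why
# the void* -> Py_ssize_t -> int conversion is only safe because we
# control which values enter the queue in the first place.
wide = 2**40 + 7
assert ctypes.c_int64(wide).value == wide   # fits in 64 bits
assert ctypes.c_int32(wide).value != wide   # truncated to 32 bits
```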
Handling errors
@@ -331,16 +374,16 @@
Now, what happens when the queue is empty? According to the
documentation, the functions return a ``NULL`` pointer, which is
-typically not a valid value. Since we are simply casting to and
+typically not a valid value. But since we are simply casting to and
from ints, we cannot distinguish anymore if the return value was
``NULL`` because the queue was empty or because the value stored in
-the queue was ``0``. However, in Cython code, we would expect the
-first case to raise an exception, whereas the second case should
-simply return ``0``. To deal with this, we need to special case this
-value, and check if the queue really is empty or not::
+the queue was ``0``. In Cython code, we want the first case to
+raise an exception, whereas the second case should simply return
+``0``. To deal with this, we need to special case this value,
+and check if the queue really is empty or not::
cdef int peek(self) except? -1:
- value = cqueue.queue_peek_head(self._c_queue)
+ cdef int value = cqueue.queue_peek_head(self._c_queue)
if value == 0:
# this may mean that the queue is empty, or
# that it happens to contain a 0 value
@@ -392,7 +435,7 @@
cdef int pop(self) except? -1:
if cqueue.queue_is_empty(self._c_queue):
raise IndexError("Queue is empty")
- return cqueue.queue_pop_head(self._c_queue)
+ return cqueue.queue_pop_head(self._c_queue)
The return value for exception propagation is declared exactly as for
``peek()``.
@@ -430,77 +473,28 @@
methods even when they are called from Cython. This adds a tiny overhead
compared to ``cdef`` methods.
-The following listing shows the complete implementation that uses
-``cpdef`` methods where possible::
-
- cimport cqueue
-
- cdef class Queue:
- """A queue class for C integer values.
-
- >>> q = Queue()
- >>> q.append(5)
- >>> q.peek()
- 5
- >>> q.pop()
- 5
- """
- cdef cqueue.Queue* _c_queue
- def __cinit__(self):
- self._c_queue = cqueue.queue_new()
- if self._c_queue is NULL:
- raise MemoryError()
-
- def __dealloc__(self):
- if self._c_queue is not NULL:
- cqueue.queue_free(self._c_queue)
-
- cpdef append(self, int value):
- if not cqueue.queue_push_tail(self._c_queue,
- value):
- raise MemoryError()
+Now that we have both a C-interface and a Python interface for our
+class, we should make sure that both interfaces are consistent.
+Python users would expect an ``extend()`` method that accepts arbitrary
+iterables, whereas C users would like to have one that allows passing
+C arrays and C memory. Both signatures are incompatible.
+
+We will solve this issue by considering that in C, the API could also
+want to support other input types, e.g. arrays of ``long`` or ``char``,
+which is usually supported with differently named C API functions such as
+``extend_ints()``, ``extend_longs()``, ``extend_chars()``, etc. This allows
+us to free the method name ``extend()`` for the duck typed Python method,
+which can accept arbitrary iterables.
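The naming pattern described above can be sketched in plain Python (the queue internals are faked with a list here, and the ``extend_ints()`` name merely follows the convention mentioned, it is not a real API):

```python
class Queue:
    """Sketch of the dual-interface pattern: typed helpers for C callers,
    a duck-typed extend() for Python callers."""

    def __init__(self):
        self._items = []

    def append(self, value):
        self._items.append(value)

    # In Cython this would be a cdef method taking an int* array and a count.
    def extend_ints(self, values):
        for v in values:
            self.append(int(v))

    def extend(self, values):
        # Duck-typed: accepts any Python iterable.
        for v in values:
            self.append(v)

q = Queue()
q.extend_ints([1, 2])
q.extend(range(3, 5))
assert q._items == [1, 2, 3, 4]
```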
- cdef extend(self, int* values, size_t count):
- cdef size_t i
- for i in xrange(count):
- if not cqueue.queue_push_tail(
- self._c_queue, values[i]):
- raise MemoryError()
-
- cpdef int peek(self) except? -1:
- cdef int value = \
- cqueue.queue_peek_head(self._c_queue)
- if value == 0:
- # this may mean that the queue is empty,
- # or that it happens to contain a 0 value
- if cqueue.queue_is_empty(self._c_queue):
- raise IndexError("Queue is empty")
- return value
+The following listing shows the complete implementation that uses
+``cpdef`` methods where possible:
- cpdef int pop(self) except? -1:
- if cqueue.queue_is_empty(self._c_queue):
- raise IndexError("Queue is empty")
- return cqueue.queue_pop_head(self._c_queue)
+.. literalinclude:: ../../examples/tutorial/clibraries/queue3.pyx
- def __bool__(self):
- return not cqueue.queue_is_empty(self._c_queue)
+Now we can test our Queue implementation using a Python script,
+for example this :file:`test_queue.py`:
-The ``cpdef`` feature is obviously not available for the ``extend()``
-method, as the method signature is incompatible with Python argument
-types. However, if wanted, we can rename the C-ish ``extend()``
-method to e.g. ``c_extend()``, and write a new ``extend()`` method
-instead that accepts an arbitrary Python iterable::
-
- cdef c_extend(self, int* values, size_t count):
- cdef size_t i
- for i in range(count):
- if not cqueue.queue_push_tail(
- self._c_queue, values[i]):
- raise MemoryError()
-
- cpdef extend(self, values):
- for value in values:
- self.append(value)
+.. literalinclude:: ../../examples/tutorial/clibraries/test_queue.py
As a quick test with 10000 numbers on the author's machine indicates,
using this Queue from Cython code with C ``int`` values is about five
diff -Nru cython-0.26.1/docs/src/tutorial/cython_tutorial.rst cython-0.29.14/docs/src/tutorial/cython_tutorial.rst
--- cython-0.26.1/docs/src/tutorial/cython_tutorial.rst 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/docs/src/tutorial/cython_tutorial.rst 2018-11-24 09:20:06.000000000 +0000
@@ -34,7 +34,7 @@
So lets start with the canonical python hello world::
- print "Hello World"
+ print("Hello World")
Save this code in a file named :file:`helloworld.pyx`. Now we need to create
the :file:`setup.py`, which is like a python Makefile (for more information
@@ -64,20 +64,20 @@
this example doesn't really give a feeling why one would ever want to use Cython, so
lets create a more realistic example.
-:mod:`pyximport`: Cython Compilation the Easy Way
-==================================================
+:mod:`pyximport`: Cython Compilation for Developers
+---------------------------------------------------
If your module doesn't require any extra C libraries or a special
-build setup, then you can use the pyximport module by Paul Prescod and
-Stefan Behnel to load .pyx files directly on import, without having to
-write a :file:`setup.py` file. It is shipped and installed with
-Cython and can be used like this::
+build setup, then you can use the pyximport module, originally developed
+by Paul Prescod, to load .pyx files directly on import, without having
+to run your :file:`setup.py` file each time you change your code.
+It is shipped and installed with Cython and can be used like this::
>>> import pyximport; pyximport.install()
>>> import helloworld
Hello World
-Since Cython 0.11, the :mod:`pyximport` module also has experimental
+The :ref:`Pyximport` module also has experimental
compilation support for normal Python modules. This allows you to
automatically run Cython on every .pyx and .py module that Python
imports, including the standard library and installed packages.
@@ -85,14 +85,19 @@
case the import mechanism will fall back to loading the Python source
modules instead. The .py import mechanism is installed like this::
- >>> pyximport.install(pyimport = True)
+ >>> pyximport.install(pyimport=True)
+
+Note that it is not recommended to let :ref:`Pyximport` build code
+on the end user's side as it hooks into their import system. The best way
+to cater for end users is to provide pre-built binary packages in the
+`wheel `_ packaging format.
Fibonacci Fun
==============
From the official Python tutorial a simple fibonacci function is defined as:
-.. literalinclude:: ../../examples/tutorial/fib1/fib.pyx
+.. literalinclude:: ../../examples/tutorial/cython_tutorial/fib.pyx
Now following the steps for the Hello World example we first rename the file
to have a `.pyx` extension, lets say :file:`fib.pyx`, then we create the
@@ -100,7 +105,7 @@
that you need to change is the name of the Cython filename, and the resulting
module name, doing this we have:
-.. literalinclude:: ../../examples/tutorial/fib1/setup.py
+.. literalinclude:: ../../examples/tutorial/cython_tutorial/setup.py
Build the extension with the same command used for the helloworld.pyx:
@@ -114,6 +119,8 @@
>>> fib.fib(2000)
1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597
+.. _primes:
+
Primes
=======
@@ -123,31 +130,102 @@
:file:`primes.pyx`:
-.. literalinclude:: ../../examples/tutorial/primes/primes.pyx
+.. literalinclude:: ../../examples/tutorial/cython_tutorial/primes.pyx
:linenos:
You'll see that it starts out just like a normal Python function definition,
-except that the parameter ``kmax`` is declared to be of type ``int`` . This
+except that the parameter ``nb_primes`` is declared to be of type ``int``. This
means that the object passed will be converted to a C integer (or a
``TypeError`` will be raised if it can't be).
+Now, let's dig into the core of the function::
+
+ cdef int n, i, len_p
+ cdef int p[1000]
+
Lines 2 and 3 use the ``cdef`` statement to define some local C variables.
-Line 4 creates a Python list which will be used to return the result. You'll
-notice that this is done exactly the same way it would be in Python. Because
-the variable result hasn't been given a type, it is assumed to hold a Python
-object.
+The result is stored in the C array ``p`` during processing,
+and will be copied into a Python list at the end (line 22).
+
+.. NOTE:: You cannot create very large arrays in this manner, because
+ they are allocated on the C function call stack, which is a
+ rather precious and scarce resource.
+ To request larger arrays,
+ or even arrays with a length only known at runtime,
+ you can learn how to make efficient use of
+ :ref:`C memory allocation `,
+ :ref:`Python arrays `
+ or :ref:`NumPy arrays ` with Cython.
+
+::
+
+ if nb_primes > 1000:
+ nb_primes = 1000
+
+As in C, declaring a static array requires knowing the size at compile time.
+We make sure the user doesn't set a value above 1000 (or we would have a
+segmentation fault, just like in C). ::
+
+ len_p = 0 # The number of elements in p
+ n = 2
+ while len_p < nb_primes:
Lines 7-9 set up for a loop which will test candidate numbers for primeness
-until the required number of primes has been found. Lines 11-12, which try
-dividing a candidate by all the primes found so far, are of particular
-interest. Because no Python objects are referred to, the loop is translated
-entirely into C code, and thus runs very fast.
-
-When a prime is found, lines 14-15 add it to the p array for fast access by
-the testing loop, and line 16 adds it to the result list. Again, you'll notice
-that line 16 looks very much like a Python statement, and in fact it is, with
-the twist that the C parameter ``n`` is automatically converted to a Python
-object before being passed to the append method. Finally, at line 18, a normal
+until the required number of primes has been found. ::
+
+ # Is n prime?
+ for i in p[:len_p]:
+ if n % i == 0:
+ break
+
+Lines 11-12, which try dividing a candidate by all the primes found so far,
+are of particular interest. Because no Python objects are referred to,
+the loop is translated entirely into C code, and thus runs very fast.
+You will notice the way we iterate over the ``p`` C array. ::
+
+ for i in p[:len_p]:
+
+The loop gets translated into a fast C loop and works just like iterating
+over a Python list or NumPy array. If you don't slice the C array with
+``[:len_p]``, then Cython will loop over the 1000 elements of the array.
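The effect of the slice bound can be mimicked in plain Python (a list stands in for the fixed-size C array, with zeros simulating the uninitialised tail):

```python
# Without the slice, iteration would cover the whole fixed-size buffer,
# including the uninitialised tail elements.
p = [2, 3, 5] + [0] * 997   # stands in for the 1000-element C array
len_p = 3                   # number of meaningful entries

assert [i for i in p[:len_p]] == [2, 3, 5]
assert len([i for i in p]) == 1000   # the unbounded loop visits everything
```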
+
+::
+
+ # If no break occurred in the loop
+ else:
+ p[len_p] = n
+ len_p += 1
+ n += 1
+
+If no break occurred, it means that we found a prime, and the block of code
+after the ``else`` on line 16 will be executed. We add the prime found to ``p``.
+If you find having an ``else`` after a for-loop strange, just know that it's a
+lesser-known feature of the Python language, and that Cython executes it at
+C speed for you.
+If the for-else syntax confuses you, see this excellent
+`blog post `_.
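As a minimal reminder of how ``for``...``else`` behaves in plain Python (a standalone sketch, separate from the primes code):

```python
# The else block runs only when the loop finishes without hitting break.
def has_divisor(n, candidates):
    for i in candidates:
        if n % i == 0:
            break          # a divisor was found: skip the else block
    else:
        return False       # no break occurred: no divisor found
    return True

assert has_divisor(9, [2, 3]) is True    # 3 divides 9, loop breaks
assert has_divisor(7, [2, 3]) is False   # no divisor, else block runs
```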
+
+::
+
+ # Let's put the result in a python list:
+ result_as_list = [prime for prime in p[:len_p]]
+ return result_as_list
+
+In line 22, before returning the result, we need to copy our C array into a
+Python list, because Python can't read C arrays. Cython can automatically
+convert many C types from and to Python types, as described in the
+documentation on :ref:`type conversion `, so we can use
+a simple list comprehension here to copy the C ``int`` values into a Python
+list of Python ``int`` objects, which Cython creates automatically along the way.
+You could also have iterated manually over the C array and used
+``result_as_list.append(prime)``; the result would have been the same.
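The equivalence of the two copying styles is easy to check in plain Python (a tuple stands in for the C array slice ``p[:len_p]``):

```python
# Copying values out of a sequence: comprehension vs. explicit append.
primes = (2, 3, 5, 7)   # stands in for the C array slice p[:len_p]

result_as_list = [prime for prime in primes]

manual = []
for prime in primes:
    manual.append(prime)

assert result_as_list == manual == [2, 3, 5, 7]
```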
+
+You'll notice we declare a Python list exactly the same way it would be in Python.
+Because the variable ``result_as_list`` hasn't been explicitly declared with a type,
+it is assumed to hold a Python object, and from the assignment, Cython also knows
+that the exact type is a Python list.
+
+Finally, at line 18, a normal
Python return statement returns the result list.
Compiling primes.pyx with the Cython compiler produces an extension module
@@ -160,10 +238,123 @@
See, it works! And if you're curious about how much work Cython has saved you,
take a look at the C code generated for this module.
+
+Cython has a way to visualise where interaction with Python objects and
+Python's C-API is taking place. For this, pass the
+``annotate=True`` parameter to ``cythonize()``. It produces an HTML file. Let's see:
+
+.. figure:: htmlreport.png
+
+If a line is white, it means that the code generated doesn't interact
+with Python, so it will run as fast as normal C code. The darker the yellow, the more
+Python interaction there is in that line. Those yellow lines will usually operate
+on Python objects, raise exceptions, or do other kinds of higher-level operations
+than what can easily be translated into simple and fast C code.
+The function declaration and return use the Python interpreter, so it makes
+sense for those lines to be yellow. The same goes for the list comprehension because
+it involves the creation of a Python object. But why the line ``if n % i == 0:``?
+We can examine the generated C code to understand:
+
+.. figure:: python_division.png
+
+We can see that some checks happen. Because Cython defaults to the
+Python behavior, the language will perform division checks at runtime,
+just like Python does. You can deactivate those checks by using the
+:ref:`compiler directives`.
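One reason those guards exist is that Python and C disagree on division and modulo of negative numbers, which can be seen in plain Python (an illustrative sketch; Cython's actual checks also cover division by zero):

```python
# Python floor-divides toward negative infinity; C truncates toward zero.
# Cython therefore guards % and // to reproduce Python semantics by default.
assert -7 // 3 == -3 and -7 % 3 == 2   # Python results

# C would compute -7 / 3 by truncation:
c_quotient = int(-7 / 3)
assert c_quotient == -2

# ... and C's -7 % 3 follows from that truncated quotient:
assert -7 - c_quotient * 3 == -1       # C result differs from Python's 2
```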
+
+Now let's see if, even if we have division checks, we obtained a boost in speed.
+Let's write the same program, but Python-style:
+
+.. literalinclude:: ../../examples/tutorial/cython_tutorial/primes_python.py
+
+It is also possible to take a plain ``.py`` file and to compile it with Cython.
+Let's take ``primes_python``, change the function name to ``primes_python_compiled`` and
+compile it with Cython (without changing the code). We will also change the name of the
+file to ``example_py_cy.py`` to differentiate it from the others.
+Now the ``setup.py`` looks like this::
+
+ from distutils.core import setup
+ from Cython.Build import cythonize
+
+ setup(
+ ext_modules=cythonize(['example.pyx', # Cython code file with primes() function
+ 'example_py_cy.py'], # Python code file with primes_python_compiled() function
+ annotate=True), # enables generation of the html annotation file
+ )
+
+Now we can ensure that those two programs output the same values::
+
+ >>> primes_python(1000) == primes(1000)
+ True
+ >>> primes_python_compiled(1000) == primes(1000)
+ True
+
+It's possible to compare the speed now::
+
+ python -m timeit -s 'from example_py import primes_python' 'primes_python(1000)'
+ 10 loops, best of 3: 23 msec per loop
+
+ python -m timeit -s 'from example_py_cy import primes_python_compiled' 'primes_python_compiled(1000)'
+ 100 loops, best of 3: 11.9 msec per loop
+
+ python -m timeit -s 'from example import primes' 'primes(1000)'
+ 1000 loops, best of 3: 1.65 msec per loop
+
+The cythonized version of ``primes_python`` is 2 times faster than the Python one,
+without changing a single line of code.
+The Cython version is 13 times faster than the Python version! What could explain this?
+
+Multiple things:
+ * In this program, very little computation happens at each line,
+   so the overhead of the Python interpreter matters a lot. It would be
+   very different if you were to do a lot of computation at each line,
+   for example when using NumPy.
+ * Data locality. It's likely that a lot more can fit in the CPU cache when using C than
+   when using Python. Because everything in Python is an object, and every object is
+   implemented as a dictionary, this is not very cache-friendly.
+
+Usually the speedups are between 2x and 1000x, depending on how much you call
+the Python interpreter. As always, remember to profile before adding types
+everywhere. Adding types makes your code less readable, so use them in
+moderation.
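The ``timeit`` comparisons above can also be scripted from Python itself; a minimal sketch using a pure-Python ``primes_python`` (defined inline here, so the timings are illustrative only):

```python
import timeit

def primes_python(nb_primes):
    # Pure-Python equivalent of the primes() algorithm from this tutorial.
    p, n = [], 2
    while len(p) < nb_primes:
        if all(n % i for i in p):   # no prime found so far divides n
            p.append(n)
        n += 1
    return p

assert primes_python(5) == [2, 3, 5, 7, 11]

# Time a single statement, as `python -m timeit` does internally.
seconds = timeit.timeit("primes_python(50)", globals=globals(), number=100)
assert seconds > 0
```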
+
+
+Primes with C++
+===============
+
+With Cython, it is also possible to take advantage of the C++ language, notably,
+part of the C++ standard library is directly importable from Cython code.
+
+Let's see what our :file:`primes.pyx` becomes when
+using `vector `_ from the C++
+standard library.
+
+.. note::
+
+ Vector in C++ is a data structure which implements a list or stack based
+ on a resizable C array. It is similar to the Python ``array``
+ type in the ``array`` standard library module.
+ There is a method ``reserve`` available which will avoid copies if you know in advance
+ how many elements you are going to put in the vector. For more details
+ see `this page from cppreference `_.
+
+.. literalinclude:: ../../examples/tutorial/cython_tutorial/primes_cpp.pyx
+ :linenos:
+
+The first line is a compiler directive. It tells Cython to compile your code to C++.
+This will enable the use of C++ language features and the C++ standard library.
+Note that it isn't possible to compile Cython code to C++ with ``pyximport``. You
+should use a :file:`setup.py` or a notebook to run this example.
+
+You can see that the API of a vector is similar to the API of a Python list,
+and can sometimes be used as a drop-in replacement in Cython.
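The analogy with Python's ``array`` module mentioned in the note above can be tried directly in plain Python:

```python
from array import array

# A typed, C-backed sequence, loosely comparable to a C++ vector<int>.
a = array('i', [2, 3, 5])
a.append(7)          # amortized growth, like vector::push_back()

assert a.tolist() == [2, 3, 5, 7]
assert a.itemsize >= 2   # each element is a fixed-size C integer
```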
+
+For more details about using C++ with Cython, see :ref:`wrapping-cplusplus`.
+
Language Details
================
For more about the Cython language, see :ref:`language-basics`.
To dive right in to using Cython in a numerical computation context,
-see :ref:`numpy_tutorial`.
+see :ref:`memoryviews`.
diff -Nru cython-0.26.1/docs/src/tutorial/external.rst cython-0.29.14/docs/src/tutorial/external.rst
--- cython-0.26.1/docs/src/tutorial/external.rst 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/docs/src/tutorial/external.rst 2018-11-24 09:20:06.000000000 +0000
@@ -13,13 +13,9 @@
For example, let's say you need a low-level way to parse a number from
a ``char*`` value. You could use the ``atoi()`` function, as defined
-by the ``stdlib.h`` header file. This can be done as follows::
+by the ``stdlib.h`` header file. This can be done as follows:
- from libc.stdlib cimport atoi
-
- cdef parse_charptr_to_py_int(char* s):
- assert s is not NULL, "byte string value is NULL"
- return atoi(s) # note: atoi() has no error detection!
+.. literalinclude:: ../../examples/tutorial/external/atoi.pyx
You can find a complete list of these standard cimport files in
Cython's source package
@@ -30,19 +26,13 @@
Cython also has a complete set of declarations for CPython's C-API.
For example, to test at C compilation time which CPython version
-your code is being compiled with, you can do this::
-
- from cpython.version cimport PY_VERSION_HEX
+your code is being compiled with, you can do this:
- # Python version >= 3.2 final ?
- print PY_VERSION_HEX >= 0x030200F0
+.. literalinclude:: ../../examples/tutorial/external/py_version_hex.pyx
-Cython also provides declarations for the C math library::
+Cython also provides declarations for the C math library:
- from libc.math cimport sin
-
- cdef double f(double x):
- return sin(x*x)
+.. literalinclude:: ../../examples/tutorial/external/libc_sin.pyx
Dynamic linking
@@ -52,24 +42,9 @@
on some Unix-like systems, such as Linux. In addition to cimporting the
declarations, you must configure your build system to link against the
shared library ``m``. For distutils, it is enough to add it to the
-``libraries`` parameter of the ``Extension()`` setup::
-
- from distutils.core import setup
- from distutils.extension import Extension
- from Cython.Build import cythonize
-
- ext_modules=[
- Extension("demo",
- sources=["demo.pyx"],
- libraries=["m"] # Unix-like specific
- )
- ]
-
- setup(
- name = "Demos",
- ext_modules = cythonize(ext_modules)
- )
+``libraries`` parameter of the ``Extension()`` setup:
+.. literalinclude:: ../../examples/tutorial/external/setup.py
External declarations
---------------------
@@ -95,15 +70,9 @@
Note that you can easily export an external C function from your Cython
module by declaring it as ``cpdef``. This generates a Python wrapper
for it and adds it to the module dict. Here is a Cython module that
-provides direct access to the C ``sin()`` function for Python code::
+provides direct access to the C ``sin()`` function for Python code:
- """
- >>> sin(0)
- 0.0
- """
-
- cdef extern from "math.h":
- cpdef double sin(double x)
+.. literalinclude:: ../../examples/tutorial/external/cpdef_sin.pyx
You get the same result when this declaration appears in the ``.pxd``
file that belongs to the Cython module (i.e. that has the same name,
@@ -123,20 +92,16 @@
char* strstr(const char*, const char*)
However, this prevents Cython code from calling it with keyword
-arguments (supported since Cython 0.19). It is therefore preferable
-to write the declaration like this instead::
+arguments. It is therefore preferable
+to write the declaration like this instead:
- cdef extern from "string.h":
- char* strstr(const char *haystack, const char *needle)
+.. literalinclude:: ../../examples/tutorial/external/keyword_args.pyx
You can now make it clear which of the two arguments does what in
your call, thus avoiding any ambiguities and often making your code
-more readable::
-
- cdef char* data = "hfvcakdfagbcffvschvxcdfgccbcfhvgcsnfxjh"
+more readable:
- pos = strstr(needle='akd', haystack=data)
- print pos != NULL
+.. literalinclude:: ../../examples/tutorial/external/keyword_args_call.pyx
Note that changing existing parameter names later is a backwards
incompatible API modification, just as for Python code. Thus, if
Binary files /tmp/tmp0lrW9P/aTeTJbw7H9/cython-0.26.1/docs/src/tutorial/htmlreport.png and /tmp/tmp0lrW9P/hdCxpT7ujz/cython-0.29.14/docs/src/tutorial/htmlreport.png differ
diff -Nru cython-0.26.1/docs/src/tutorial/memory_allocation.rst cython-0.29.14/docs/src/tutorial/memory_allocation.rst
--- cython-0.26.1/docs/src/tutorial/memory_allocation.rst 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/docs/src/tutorial/memory_allocation.rst 2018-11-24 09:20:06.000000000 +0000
@@ -32,27 +32,10 @@
void* realloc(void* ptr, size_t size)
void free(void* ptr)
-A very simple example of malloc usage is the following::
+A very simple example of malloc usage is the following:
- import random
- from libc.stdlib cimport malloc, free
-
- def random_noise(int number=1):
- cdef int i
- # allocate number * sizeof(double) bytes of memory
- cdef double *my_array = malloc(number * sizeof(double))
- if not my_array:
- raise MemoryError()
-
- try:
- ran = random.normalvariate
- for i in range(number):
- my_array[i] = ran(0,1)
-
- return [ my_array[i] for i in range(number) ]
- finally:
- # return the previously allocated memory to the system
- free(my_array)
+.. literalinclude:: ../../examples/tutorial/memory_allocation/malloc.pyx
+ :linenos:
Note that the C-API functions for allocating memory on the Python heap
are generally preferred over the low-level C functions above as the
@@ -79,28 +62,6 @@
If a chunk of memory needs a larger lifetime than can be managed by a
``try..finally`` block, another helpful idiom is to tie its lifetime
to a Python object to leverage the Python runtime's memory management,
-e.g.::
-
- cdef class SomeMemory:
-
- cdef double* data
-
- def __cinit__(self, size_t number):
- # allocate some memory (uninitialised, may contain arbitrary data)
- self.data = PyMem_Malloc(number * sizeof(double))
- if not self.data:
- raise MemoryError()
-
- def resize(self, size_t new_number):
- # Allocates new_number * sizeof(double) bytes,
- # preserving the current content and making a best-effort to
- # re-use the original data location.
- mem = PyMem_Realloc(self.data, new_number * sizeof(double))
- if not mem:
- raise MemoryError()
- # Only overwrite the pointer if the memory was really reallocated.
- # On error (mem is NULL), the originally memory has not been freed.
- self.data = mem
+e.g.:
- def __dealloc__(self):
- PyMem_Free(self.data) # no-op if self.data is NULL
+.. literalinclude:: ../../examples/tutorial/memory_allocation/some_memory.pyx
diff -Nru cython-0.26.1/docs/src/tutorial/numpy.rst cython-0.29.14/docs/src/tutorial/numpy.rst
--- cython-0.26.1/docs/src/tutorial/numpy.rst 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/docs/src/tutorial/numpy.rst 2018-11-24 09:20:06.000000000 +0000
@@ -1,7 +1,15 @@
+.. _working-numpy:
+
=======================
Working with NumPy
=======================
+.. NOTE:: Cython 0.16 introduced typed memoryviews as a successor to the NumPy
+ integration described here. They are easier to use than the buffer syntax
+ below, have less overhead, and can be passed around without requiring the GIL.
+ They should be preferred to the syntax presented in this page.
+ See :ref:`Cython for NumPy users `.
+
You can use NumPy from Cython exactly the same as in regular Python, but by
doing so you are losing potentially high speedups because Cython has support
for fast access to NumPy arrays. Let's see how this works with a simple
@@ -13,52 +21,10 @@
:file:`convolve_py.py` for the Python version and :file:`convolve1.pyx` for
the Cython version -- Cython uses ".pyx" as its file suffix.
-.. code-block:: python
-
- from __future__ import division
- import numpy as np
- def naive_convolve(f, g):
- # f is an image and is indexed by (v, w)
- # g is a filter kernel and is indexed by (s, t),
- # it needs odd dimensions
- # h is the output image and is indexed by (x, y),
- # it is not cropped
- if g.shape[0] % 2 != 1 or g.shape[1] % 2 != 1:
- raise ValueError("Only odd dimensions on filter supported")
- # smid and tmid are number of pixels between the center pixel
- # and the edge, ie for a 5x5 filter they will be 2.
- #
- # The output size is calculated by adding smid, tmid to each
- # side of the dimensions of the input image.
- vmax = f.shape[0]
- wmax = f.shape[1]
- smax = g.shape[0]
- tmax = g.shape[1]
- smid = smax // 2
- tmid = tmax // 2
- xmax = vmax + 2*smid
- ymax = wmax + 2*tmid
- # Allocate result image.
- h = np.zeros([xmax, ymax], dtype=f.dtype)
- # Do convolution
- for x in range(xmax):
- for y in range(ymax):
- # Calculate pixel value for h at (x,y). Sum one component
- # for each pixel (s, t) of the filter g.
- s_from = max(smid - x, -smid)
- s_to = min((xmax - x) - smid, smid + 1)
- t_from = max(tmid - y, -tmid)
- t_to = min((ymax - y) - tmid, tmid + 1)
- value = 0
- for s in range(s_from, s_to):
- for t in range(t_from, t_to):
- v = x - smid + s
- w = y - tmid + t
- value += g[smid - s, tmid - t] * f[v, w]
- h[x, y] = value
- return h
+.. literalinclude:: ../../examples/tutorial/numpy/convolve_py.py
-This should be compiled to produce :file:`yourmod.so` (for Linux systems). We
+This should be compiled to produce :file:`yourmod.so` (for Linux systems, on Windows
+systems, it will be :file:`yourmod.pyd`). We
run a Python session to test both the Python version (imported from
``.py``-file) and the compiled Cython module.
@@ -97,77 +63,9 @@
=============
To add types we use custom Cython syntax, so we are now breaking Python source
-compatibility. Consider this code (*read the comments!*) ::
+compatibility. Consider this code (*read the comments!*) :
- from __future__ import division
- import numpy as np
- # "cimport" is used to import special compile-time information
- # about the numpy module (this is stored in a file numpy.pxd which is
- # currently part of the Cython distribution).
- cimport numpy as np
- # We now need to fix a datatype for our arrays. I've used the variable
- # DTYPE for this, which is assigned to the usual NumPy runtime
- # type info object.
- DTYPE = np.int
- # "ctypedef" assigns a corresponding compile-time type to DTYPE_t. For
- # every type in the numpy module there's a corresponding compile-time
- # type with a _t-suffix.
- ctypedef np.int_t DTYPE_t
- # "def" can type its arguments but not have a return type. The type of the
- # arguments for a "def" function is checked at run-time when entering the
- # function.
- #
- # The arrays f, g and h are typed as "np.ndarray" instances. The only effect
- # this has is to a) insert checks that the function arguments really are
- # NumPy arrays, and b) make some attribute access like f.shape[0] much
- # more efficient. (In this example this doesn't matter though.)
- def naive_convolve(np.ndarray f, np.ndarray g):
- if g.shape[0] % 2 != 1 or g.shape[1] % 2 != 1:
- raise ValueError("Only odd dimensions on filter supported")
- assert f.dtype == DTYPE and g.dtype == DTYPE
- # The "cdef" keyword is also used within functions to type variables. It
- # can only be used at the top indentation level (there are non-trivial
- # problems with allowing them in other places, though we'd love to see
- # good and thought out proposals for it).
- #
- # For the indices, the "int" type is used. This corresponds to a C int,
- # other C types (like "unsigned int") could have been used instead.
- # Purists could use "Py_ssize_t" which is the proper Python type for
- # array indices.
- cdef int vmax = f.shape[0]
- cdef int wmax = f.shape[1]
- cdef int smax = g.shape[0]
- cdef int tmax = g.shape[1]
- cdef int smid = smax // 2
- cdef int tmid = tmax // 2
- cdef int xmax = vmax + 2*smid
- cdef int ymax = wmax + 2*tmid
- cdef np.ndarray h = np.zeros([xmax, ymax], dtype=DTYPE)
- cdef int x, y, s, t, v, w
- # It is very important to type ALL your variables. You do not get any
- # warnings if not, only much slower code (they are implicitly typed as
- # Python objects).
- cdef int s_from, s_to, t_from, t_to
- # For the value variable, we want to use the same data type as is
- # stored in the array, so we use "DTYPE_t" as defined above.
- # NB! An important side-effect of this is that if "value" overflows its
- # datatype size, it will simply wrap around like in C, rather than raise
- # an error like in Python.
- cdef DTYPE_t value
- for x in range(xmax):
- for y in range(ymax):
- s_from = max(smid - x, -smid)
- s_to = min((xmax - x) - smid, smid + 1)
- t_from = max(tmid - y, -tmid)
- t_to = min((ymax - y) - tmid, tmid + 1)
- value = 0
- for s in range(s_from, s_to):
- for t in range(t_from, t_to):
- v = x - smid + s
- w = y - tmid + t
- value += g[smid - s, tmid - t] * f[v, w]
- h[x, y] = value
- return h
+.. literalinclude:: ../../examples/tutorial/numpy/convolve2.pyx
After building this and continuing my (very informal) benchmarks, I get:
diff -Nru cython-0.26.1/docs/src/tutorial/profiling_tutorial.rst cython-0.29.14/docs/src/tutorial/profiling_tutorial.rst
--- cython-0.26.1/docs/src/tutorial/profiling_tutorial.rst 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/docs/src/tutorial/profiling_tutorial.rst 2018-11-24 09:20:06.000000000 +0000
@@ -44,14 +44,9 @@
functions that you rather do not want to see in your profile - either because
you plan to inline them anyway or because you are sure that you can't make them
any faster - you can use a special decorator to disable profiling for one
-function only::
-
- cimport cython
-
- @cython.profile(False)
- def my_often_called_function():
- pass
+function only (regardless of whether it is globally enabled or not):
+.. literalinclude:: ../../examples/tutorial/profiling_tutorial/often_called.pyx
Enabling line tracing
---------------------
@@ -80,7 +75,7 @@
--------------------------
Since Cython 0.23, line tracing (see above) also enables support for coverage
-reporting with the `coverage.py <http://coverage.readthedocs.org/>`_ tool.
+reporting with the `coverage.py <https://coverage.readthedocs.io/>`_ tool.
To make the coverage analysis understand Cython modules, you also need to enable
Cython's coverage plugin in your ``.coveragerc`` file as follows:
@@ -116,7 +111,7 @@
As a toy example, we would like to evaluate the summation of the reciprocals of
squares up to a certain integer :math:`n` for evaluating :math:`\pi`. The
relation we want to use has been proven by Euler in 1735 and is known as the
-`Basel problem <http://en.wikipedia.org/wiki/Basel_problem>`_.
+`Basel problem <https://en.wikipedia.org/wiki/Basel_problem>`_.
.. math::
@@ -125,20 +120,9 @@
\frac{1}{2^2} + \dots + \frac{1}{k^2} \big) \approx
6 \big( \frac{1}{1^2} + \frac{1}{2^2} + \dots + \frac{1}{n^2} \big)
-A simple Python code for evaluating the truncated sum looks like this::
+A simple Python code for evaluating the truncated sum looks like this:
- #!/usr/bin/env python
- # encoding: utf-8
- # filename: calc_pi.py
-
- def recip_square(i):
- return 1./i**2
-
- def approx_pi(n=10000000):
- val = 0.
- for k in range(1,n+1):
- val += recip_square(k)
- return (6 * val)**.5
+.. literalinclude:: ../../examples/tutorial/profiling_tutorial/calc_pi.py
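The referenced file matches the listing that this page previously inlined; restated in plain Python, it is easy to check against ``math.pi`` (for experiments, a smaller ``n`` than the 10-million default keeps it fast):

```python
import math

def recip_square(i):
    return 1. / i ** 2

def approx_pi(n=10000000):
    val = 0.
    for k in range(1, n + 1):
        val += recip_square(k)
    return (6 * val) ** .5

# The truncation error of the Basel sum shrinks roughly like 1/n.
pi_estimate = approx_pi(10000)
```
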
On my box, this needs approximately 4 seconds to run the function with the
default n. The higher we choose n, the better will be the approximation for
@@ -147,20 +131,9 @@
Never optimize without having profiled. Let me repeat this: **Never** optimize
without having profiled your code. Your thoughts about which part of your
code takes too much time are wrong. At least, mine are always wrong. So let's
-write a short script to profile our code::
-
- #!/usr/bin/env python
- # encoding: utf-8
- # filename: profile.py
-
- import pstats, cProfile
-
- import calc_pi
-
- cProfile.runctx("calc_pi.approx_pi()", globals(), locals(), "Profile.prof")
+write a short script to profile our code:
- s = pstats.Stats("Profile.prof")
- s.strip_dirs().sort_stats("time").print_stats()
+.. literalinclude:: ../../examples/tutorial/profiling_tutorial/profile.py
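The profiling script above writes its statistics to :file:`Profile.prof`; for interactive experiments, an equivalent in-memory variant avoids the intermediate file (``work`` is a stand-in function for illustration, not part of the tutorial):

```python
import cProfile
import io
import pstats

def work(n=100000):
    # something cheap to profile
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# feed the live Profile object to pstats instead of a dump file
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.strip_dirs().sort_stats("time").print_stats()
report = stream.getvalue()
```
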
Running this on my box gives the following output:
@@ -182,7 +155,7 @@
This contains the information that the code runs in 6.2 CPU seconds. Note that
the code got slower by 2 seconds because it ran inside the cProfile module. The
table contains the real valuable information. You might want to check the
-Python `profiling documentation <http://docs.python.org/library/profile.html>`_
+Python `profiling documentation <https://docs.python.org/library/profile.html>`_
for the nitty-gritty details. The most important columns here are tottime (total
time spent in this function **not** counting functions that were called by this
function) and cumtime (total time spent in this function **also** counting the
@@ -194,45 +167,19 @@
We could optimize a lot in the pure Python version, but since we are interested
in Cython, let's move forward and bring this module to Cython. We would do this
-anyway at some time to get the loop run faster. Here is our first Cython version::
+anyway at some time to get the loop run faster. Here is our first Cython version:
- # encoding: utf-8
- # cython: profile=True
- # filename: calc_pi.pyx
-
- def recip_square(int i):
- return 1./i**2
-
- def approx_pi(int n=10000000):
- cdef double val = 0.
- cdef int k
- for k in xrange(1,n+1):
- val += recip_square(k)
- return (6 * val)**.5
+.. literalinclude:: ../../examples/tutorial/profiling_tutorial/calc_pi_2.pyx
-Note the second line: We have to tell Cython that profiling should be enabled.
+Note the first line: We have to tell Cython that profiling should be enabled.
This makes the Cython code slightly slower, but without this we would not get
meaningful output from the cProfile module. The rest of the code is mostly
unchanged, I only typed some variables which will likely speed things up a bit.
We also need to modify our profiling script to import the Cython module directly.
-Here is the complete version adding the import of the pyximport module::
-
- #!/usr/bin/env python
- # encoding: utf-8
- # filename: profile.py
-
- import pstats, cProfile
-
- import pyximport
- pyximport.install()
-
- import calc_pi
-
- cProfile.runctx("calc_pi.approx_pi()", globals(), locals(), "Profile.prof")
+Here is the complete version adding the import of the :ref:`Pyximport` module:
- s = pstats.Stats("Profile.prof")
- s.strip_dirs().sort_stats("time").print_stats()
+.. literalinclude:: ../../examples/tutorial/profiling_tutorial/profile_2.py
We only added two lines, the rest stays completely the same. Alternatively, we could also
manually compile our code into an extension; we wouldn't need to change the
@@ -261,21 +208,9 @@
also get rid of the power operator: it is turned into a pow(i,2) function call by
Cython, but we could instead just write i*i which could be faster. The
whole function is also a good candidate for inlining. Let's look at the
-necessary changes for these ideas::
+necessary changes for these ideas:
- # encoding: utf-8
- # cython: profile=True
- # filename: calc_pi.pyx
-
- cdef inline double recip_square(int i):
- return 1./(i*i)
-
- def approx_pi(int n=10000000):
- cdef double val = 0.
- cdef int k
- for k in xrange(1,n+1):
- val += recip_square(k)
- return (6 * val)**.5
+.. literalinclude:: ../../examples/tutorial/profiling_tutorial/calc_pi_3.pyx
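The correctness of the ``i*i`` replacement is easy to verify in plain Python; the speed benefit only materialises in the generated C code:

```python
def recip_square_pow(i):
    # the original form: Cython turns i**2 into a pow(i, 2) call
    return 1. / i ** 2

def recip_square_mul(i):
    # the rewritten form: a plain multiplication
    return 1. / (i * i)

# Both forms compute identical values for every index used in the loop.
same = all(recip_square_pow(k) == recip_square_mul(k) for k in range(1, 1000))
```
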
Now running the profile script yields:
@@ -298,24 +233,9 @@
expected. And why is recip_square still in this table? It is supposed to be
inlined, isn't it? The reason for this is that Cython still generates profiling code
even if the function call is eliminated. Let's tell it to not
-profile recip_square any more; we couldn't get the function to be much faster anyway::
+profile recip_square any more; we couldn't get the function to be much faster anyway:
- # encoding: utf-8
- # cython: profile=True
- # filename: calc_pi.pyx
-
- cimport cython
-
- @cython.profile(False)
- cdef inline double recip_square(int i):
- return 1./(i*i)
-
- def approx_pi(int n=10000000):
- cdef double val = 0.
- cdef int k
- for k in xrange(1,n+1):
- val += recip_square(k)
- return (6 * val)**.5
+.. literalinclude:: ../../examples/tutorial/profiling_tutorial/calc_pi_4.pyx
Running this shows an interesting result:
diff -Nru cython-0.26.1/docs/src/tutorial/pure.rst cython-0.29.14/docs/src/tutorial/pure.rst
--- cython-0.26.1/docs/src/tutorial/pure.rst 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/docs/src/tutorial/pure.rst 2018-11-24 09:20:06.000000000 +0000
@@ -12,17 +12,22 @@
To go beyond that, Cython provides language constructs to add static typing
and cythonic functionalities to a Python module to make it run much faster
when compiled, while still allowing it to be interpreted.
-This is accomplished either via an augmenting :file:`.pxd` file, or
+This is accomplished via an augmenting ``.pxd`` file, via Python
+type annotations (following
+`PEP 484 <https://www.python.org/dev/peps/pep-0484/>`_ and
+`PEP 526 <https://www.python.org/dev/peps/pep-0526/>`_), and/or
via special functions and decorators available after importing the magic
-``cython`` module.
+``cython`` module. All three ways can be combined as needed, although
+projects would commonly decide on a specific way to keep the static type
+information easy to manage.
Although it is not typically recommended over writing straight Cython code
in a :file:`.pyx` file, there are legitimate reasons to do this - easier
-testing, collaboration with pure Python developers, etc. In pure mode, you
-are more or less restricted to code that can be expressed (or at least
-emulated) in Python, plus static type declarations. Anything beyond that
-can only be done in .pyx files with extended language syntax, because it
-depends on features of the Cython compiler.
+testing and debugging, collaboration with pure Python developers, etc.
+In pure mode, you are more or less restricted to code that can be expressed
+(or at least emulated) in Python, plus static type declarations. Anything
+beyond that can only be done in .pyx files with extended language syntax,
+because it depends on features of the Cython compiler.
Augmenting .pxd
@@ -42,49 +47,17 @@
being compiled, it will be searched for :keyword:`cdef` classes and
:keyword:`cdef`/:keyword:`cpdef` functions and methods. The compiler will
then convert the corresponding classes/functions/methods in the :file:`.py`
-file to be of the declared type. Thus if one has a file :file:`A.py`::
+file to be of the declared type. Thus if one has a file :file:`A.py`:
- def myfunction(x, y=2):
- a = x-y
- return a + x * y
+.. literalinclude:: ../../examples/tutorial/pure/A.py
- def _helper(a):
- return a + 1
+and adds :file:`A.pxd`:
- class A:
- def __init__(self, b=0):
- self.a = 3
- self.b = b
+.. literalinclude:: ../../examples/tutorial/pure/A.pxd
- def foo(self, x):
- print x + _helper(1.0)
+then Cython will compile the :file:`A.py` as if it had been written as follows:
-and adds :file:`A.pxd`::
-
- cpdef int myfunction(int x, int y=*)
- cdef double _helper(double a)
-
- cdef class A:
- cdef public int a,b
- cpdef foo(self, double x)
-
-then Cython will compile the :file:`A.py` as if it had been written as follows::
-
- cpdef int myfunction(int x, int y=2):
- a = x-y
- return a + x * y
-
- cdef double _helper(double a):
- return a + 1
-
- cdef class A:
- cdef public int a,b
- def __init__(self, b=0):
- self.a = 3
- self.b = b
-
- cpdef foo(self, double x):
- print x + _helper(1.0)
+.. literalinclude:: ../../examples/tutorial/pure/A_equivalent.pyx
Notice how in order to provide the Python wrappers to the definitions
in the :file:`.pxd`, that is, to be accessible from Python,
@@ -141,12 +114,7 @@
* ``compiled`` is a special variable which is set to ``True`` when the compiler
runs, and ``False`` in the interpreter. Thus, the code
- ::
-
- if cython.compiled:
- print("Yep, I'm compiled.")
- else:
- print("Just a lowly interpreted script.")
+ .. literalinclude:: ../../examples/tutorial/pure/compiled_switch.py
will behave differently depending on whether or not the code is executed as a
compiled extension (:file:`.so`/:file:`.pyd`) module or a plain :file:`.py`
@@ -159,57 +127,51 @@
* ``cython.declare`` declares a typed variable in the current scope, which can be
used in place of the :samp:`cdef type var [= value]` construct. This has two forms,
the first as an assignment (useful as it creates a declaration in interpreted
- mode as well)::
-
- x = cython.declare(cython.int) # cdef int x
- y = cython.declare(cython.double, 0.57721) # cdef double y = 0.57721
-
- and the second mode as a simple function call::
+ mode as well):
- cython.declare(x=cython.int, y=cython.double) # cdef int x; cdef double y
+ .. literalinclude:: ../../examples/tutorial/pure/cython_declare.py
- It can also be used to type class constructors::
+ and the second mode as a simple function call:
- class A:
- cython.declare(a=cython.int, b=cython.int)
- def __init__(self, b=0):
- self.a = 3
- self.b = b
+ .. literalinclude:: ../../examples/tutorial/pure/cython_declare2.py
- And even to define extension type private, readonly and public attributes::
+ It can also be used to define extension type private, readonly and public attributes:
- @cython.cclass
- class A:
- cython.declare(a=cython.int, b=cython.int)
- c = cython.declare(cython.int, visibility='public')
- d = cython.declare(cython.int, 5) # private by default.
- e = cython.declare(cython.int, 5, visibility='readonly')
+ .. literalinclude:: ../../examples/tutorial/pure/cclass.py
* ``@cython.locals`` is a decorator that is used to specify the types of local
- variables in the function body (including the arguments)::
+ variables in the function body (including the arguments):
- @cython.locals(a=cython.double, b=cython.double, n=cython.p_double)
- def foo(a, b, x, y):
- n = a*b
- ...
+ .. literalinclude:: ../../examples/tutorial/pure/locals.py
* ``@cython.returns()`` specifies the function's return type.
-* Starting with Cython 0.21, Python signature annotations can be used to
- declare argument types. Cython recognises three ways to do this, as
- shown in the following example. Note that it currently needs to be
- enabled explicitly with the directive ``annotation_typing=True``.
- This might change in a later version.
+* ``@cython.exceptval(value=None, *, check=False)`` specifies the function's exception
+ return value and exception check semantics as follows::
- ::
+ @exceptval(-1) # cdef int func() except -1:
+ @exceptval(-1, check=False) # cdef int func() except -1:
+ @exceptval(check=True) # cdef int func() except *:
+ @exceptval(-1, check=True) # cdef int func() except? -1:
+
+* Python annotations can be used to declare argument types, as shown in the
+ following example. To avoid conflicts with other kinds of annotation
+ usages, this can be disabled with the directive ``annotation_typing=False``.
+
+ .. literalinclude:: ../../examples/tutorial/pure/annotations.py
- # cython: annotation_typing=True
+ This can be combined with the ``@cython.exceptval()`` decorator for non-Python
+ return types:
- def func(plain_python_type: dict,
- named_python_type: 'dict',
- explicit_python_type: {'type': dict},
- explicit_c_type: {'ctype': 'int'}):
- ...
+ .. literalinclude:: ../../examples/tutorial/pure/exceptval.py
+
+ Since version 0.27, Cython also supports the variable annotations defined
+ in `PEP 526 <https://www.python.org/dev/peps/pep-0526/>`_. This allows declaring
+ types of variables in a Python 3.6 compatible way as follows:
+
+ .. literalinclude:: ../../examples/tutorial/pure/pep_526.py
+
+ There is currently no way to express the visibility of object attributes.
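Both annotation styles are ordinary Python syntax, so an annotated module keeps running unchanged in the interpreter, which merely records the types in ``__annotations__``; a minimal sketch (the function names are illustrative, not from the referenced example files):

```python
def repeat_text(text: str, times: int = 2) -> str:
    # PEP 484 argument/return annotations: static types when compiled,
    # plain runtime metadata when interpreted.
    return text * times

def count_up(n: int) -> int:
    # PEP 526 variable annotation: becomes "cdef int total" when compiled.
    total: int = 0
    for k in range(n):
        total += k
    return total

result = count_up(5)
```
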
C types
@@ -250,6 +212,10 @@
* ``@cython.inline`` is the equivalent of the C ``inline`` modifier.
+* ``@cython.final`` terminates the inheritance chain by preventing a type from
+ being used as a base class, or a method from being overridden in subtypes.
+ This enables certain optimisations such as inlined method calls.
+
Here is an example of a :keyword:`cdef` function::
@cython.cfunc
@@ -273,8 +239,8 @@
::
cython.declare(n=cython.longlong)
- print cython.sizeof(cython.longlong)
- print cython.sizeof(n)
+ print(cython.sizeof(cython.longlong))
+ print(cython.sizeof(n))
* ``struct`` can be used to create struct types::
@@ -312,20 +278,13 @@
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The special `cython` module can also be imported and used within the augmenting
-:file:`.pxd` file. For example, the following Python file :file:`dostuff.py`::
-
- def dostuff(n):
- t = 0
- for i in range(n):
- t += i
- return t
+:file:`.pxd` file. For example, the following Python file :file:`dostuff.py`:
-can be augmented with the following :file:`.pxd` file :file:`dostuff.pxd`::
+.. literalinclude:: ../../examples/tutorial/pure/dostuff.py
- import cython
+can be augmented with the following :file:`.pxd` file :file:`dostuff.pxd`:
- @cython.locals(t = cython.int, i = cython.int)
- cpdef int dostuff(int n)
+.. literalinclude:: ../../examples/tutorial/pure/dostuff.pxd
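Uncompiled, :file:`dostuff.py` is just this function (restated from the listing this page previously inlined); the augmenting ``.pxd`` only affects the compiled version, where ``t`` and ``i`` become C ints:

```python
def dostuff(n):
    # interpreted: plain Python ints; compiled with the .pxd above:
    # a cpdef function with C-typed locals
    t = 0
    for i in range(n):
        t += i
    return t
```
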
The :func:`cython.declare()` function can be used to specify types for global
variables in the augmenting :file:`.pxd` file.
@@ -340,25 +299,11 @@
Normally, it isn't possible to call C functions in pure Python mode as there
is no general way to support it in normal (uncompiled) Python. However, in
cases where an equivalent Python function exists, this can be achieved by
-combining C function coercion with a conditional import as follows::
-
- # in mymodule.pxd:
+combining C function coercion with a conditional import as follows:
- # declare a C function as "cpdef" to export it to the module
- cdef extern from "math.h":
- cpdef double sin(double x)
-
-
- # in mymodule.py:
+.. literalinclude:: ../../examples/tutorial/pure/mymodule.pxd
- import cython
-
- # override with Python import if not in compiled code
- if not cython.compiled:
- from math import sin
-
- # calls sin() from math.h when compiled with Cython and math.sin() in Python
- print(sin(0))
+.. literalinclude:: ../../examples/tutorial/pure/mymodule.py
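The same fallback pattern can be emulated entirely in the interpreter by treating the module as "not compiled" whenever the ``cython`` shim is unavailable (a sketch of the pattern, not the referenced example files):

```python
try:
    import cython
    COMPILED = cython.compiled
except ImportError:
    # cython not installed: certainly running as plain Python
    COMPILED = False

if not COMPILED:
    # interpreted: fall back to the pure-Python sin()
    from math import sin

value = sin(0)
```
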
Note that the "sin" function will show up in the module namespace of "mymodule"
here (i.e. there will be a ``mymodule.sin()`` function). You can mark it as an
@@ -375,24 +320,11 @@
Using C arrays for fixed size lists
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Since Cython 0.22, C arrays can automatically coerce to Python lists or tuples.
+C arrays can automatically coerce to Python lists or tuples.
This can be exploited to replace fixed size Python lists in Python code by C
-arrays when compiled. An example::
-
- import cython
+arrays when compiled. An example:
- @cython.locals(counts=cython.int[10], digit=cython.int)
- def count_digits(digits):
- """
- >>> digits = '01112222333334445667788899'
- >>> count_digits(map(int, digits))
- [1, 3, 4, 5, 3, 1, 2, 2, 3, 2]
- """
- counts = [0] * 10
- for digit in digits:
- assert 0 <= digit <= 9
- counts[digit] += 1
- return counts
+.. literalinclude:: ../../examples/tutorial/pure/c_arrays.py
In normal Python, this will use a Python list to collect the counts, whereas
Cython will generate C code that uses a C array of C ints.
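The removed listing (now in ``examples/tutorial/pure/c_arrays.py``) is plain Python, so its doctest can be checked directly in the interpreter:

```python
def count_digits(digits):
    # Plain-Python version of the listing; when compiled, @cython.locals
    # turns `counts` into a C int array of length 10.
    counts = [0] * 10
    for digit in digits:
        assert 0 <= digit <= 9
        counts[digit] += 1
    return counts

counts = count_digits(map(int, '01112222333334445667788899'))
```
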
diff -Nru cython-0.26.1/docs/src/tutorial/pxd_files.rst cython-0.29.14/docs/src/tutorial/pxd_files.rst
--- cython-0.26.1/docs/src/tutorial/pxd_files.rst 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/docs/src/tutorial/pxd_files.rst 2018-11-24 09:20:06.000000000 +0000
@@ -1,3 +1,5 @@
+.. _pxd_files:
+
pxd files
=========
Binary files /tmp/tmp0lrW9P/aTeTJbw7H9/cython-0.26.1/docs/src/tutorial/python_division.png and /tmp/tmp0lrW9P/hdCxpT7ujz/cython-0.29.14/docs/src/tutorial/python_division.png differ
diff -Nru cython-0.26.1/docs/src/tutorial/queue_example/cqueue.pxd cython-0.29.14/docs/src/tutorial/queue_example/cqueue.pxd
--- cython-0.26.1/docs/src/tutorial/queue_example/cqueue.pxd 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/docs/src/tutorial/queue_example/cqueue.pxd 1970-01-01 00:00:00.000000000 +0000
@@ -1,17 +0,0 @@
-cdef extern from "libcalg/queue.h":
- ctypedef struct Queue:
- pass
- ctypedef void* QueueValue
-
- Queue* queue_new()
- void queue_free(Queue* queue)
-
- int queue_push_head(Queue* queue, QueueValue data)
- QueueValue queue_pop_head(Queue* queue)
- QueueValue queue_peek_head(Queue* queue)
-
- int queue_push_tail(Queue* queue, QueueValue data)
- QueueValue queue_pop_tail(Queue* queue)
- QueueValue queue_peek_tail(Queue* queue)
-
- int queue_is_empty(Queue* queue)
diff -Nru cython-0.26.1/docs/src/tutorial/queue_example/queue.pyx cython-0.29.14/docs/src/tutorial/queue_example/queue.pyx
--- cython-0.26.1/docs/src/tutorial/queue_example/queue.pyx 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/docs/src/tutorial/queue_example/queue.pyx 1970-01-01 00:00:00.000000000 +0000
@@ -1,96 +0,0 @@
-cimport cqueue
-
-cdef class Queue:
-
- cdef cqueue.Queue* _c_queue
-
- def __cinit__(self):
- self._c_queue = cqueue.queue_new()
- if self._c_queue is NULL:
- raise MemoryError()
-
- def __dealloc__(self):
- if self._c_queue is not NULL:
- cqueue.queue_free(self._c_queue)
-
- cpdef int append(self, int value) except -1:
- if not cqueue.queue_push_tail(self._c_queue, value):
- raise MemoryError()
- return 0
-
- cdef int extend(self, int* values, Py_ssize_t count) except -1:
- cdef Py_ssize_t i
- for i in range(count):
- if not cqueue.queue_push_tail(self._c_queue, values[i]):
- raise MemoryError()
- return 0
-
- cpdef int peek(self) except? 0:
- cdef int value = cqueue.queue_peek_head(self._c_queue)
- if value == 0:
- # this may mean that the queue is empty, or that it
- # happens to contain a 0 value
- if cqueue.queue_is_empty(self._c_queue):
- raise IndexError("Queue is empty")
- return value
-
- cpdef int pop(self) except? 0:
- cdef int value = cqueue.queue_pop_head(self._c_queue)
- if value == 0:
- # this may mean that the queue is empty, or that it
- # happens to contain a 0 value
- if cqueue.queue_is_empty(self._c_queue):
- raise IndexError("Queue is empty")
- return value
-
- def __bool__(self): # same as __nonzero__ in Python 2.x
- return not cqueue.queue_is_empty(self._c_queue)
-
-DEF repeat_count=10000
-
-def test_cy():
- cdef int i
- cdef Queue q = Queue()
- for i in range(repeat_count):
- q.append(i)
- for i in range(repeat_count):
- q.peek()
- while q:
- q.pop()
-
-def test_py():
- cdef int i
- q = Queue()
- for i in range(repeat_count):
- q.append(i)
- for i in range(repeat_count):
- q.peek()
- while q:
- q.pop()
-
-from collections import deque
-
-def test_deque():
- cdef int i
- q = deque()
- for i in range(repeat_count):
- q.appendleft(i)
- for i in range(repeat_count):
- q[-1]
- while q:
- q.pop()
-
-repeat = range(repeat_count)
-
-def test_py_exec():
- q = Queue()
- d = dict(q=q, repeat=repeat)
-
- exec u"""\
-for i in repeat:
- q.append(9)
-for i in repeat:
- q.peek()
-while q:
- q.pop()
-""" in d
diff -Nru cython-0.26.1/docs/src/tutorial/readings.rst cython-0.29.14/docs/src/tutorial/readings.rst
--- cython-0.26.1/docs/src/tutorial/readings.rst 2015-06-22 12:53:11.000000000 +0000
+++ cython-0.29.14/docs/src/tutorial/readings.rst 2018-11-24 09:20:06.000000000 +0000
@@ -4,7 +4,7 @@
The main documentation is located at http://docs.cython.org/. Some
recent features might not have documentation written yet, in such
cases some notes can usually be found in the form of a Cython
-Enhancement Proposal (CEP) on http://wiki.cython.org/enhancements.
+Enhancement Proposal (CEP) on https://github.com/cython/cython/wiki/enhancements.
[Seljebotn09]_ contains more information about Cython and NumPy
arrays. If you intend to use Cython code in a multi-threaded setting,
@@ -20,7 +20,7 @@
clear bug, to ask for guidance if you have time to spare to develop
Cython, or if you have suggestions for future development.
-.. [DevList] Cython developer mailing list: http://mail.python.org/mailman/listinfo/cython-devel
+.. [DevList] Cython developer mailing list: https://mail.python.org/mailman/listinfo/cython-devel
.. [Seljebotn09] D. S. Seljebotn, Fast numerical computations with Cython,
Proceedings of the 8th Python in Science Conference, 2009.
-.. [UserList] Cython users mailing list: http://groups.google.com/group/cython-users
+.. [UserList] Cython users mailing list: https://groups.google.com/group/cython-users
diff -Nru cython-0.26.1/docs/src/tutorial/related_work.rst cython-0.29.14/docs/src/tutorial/related_work.rst
--- cython-0.26.1/docs/src/tutorial/related_work.rst 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/docs/src/tutorial/related_work.rst 2018-11-24 09:20:06.000000000 +0000
@@ -39,12 +39,12 @@
it does not support natively, and supports very few of the standard
Python modules.
-.. [ctypes] http://docs.python.org/library/ctypes.html.
+.. [ctypes] https://docs.python.org/library/ctypes.html.
.. there's also the original ctypes home page: http://python.net/crew/theller/ctypes/
.. [Pyrex] G. Ewing, Pyrex: C-Extensions for Python,
http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/
.. [ShedSkin] M. Dufour, J. Coughlan, ShedSkin,
- http://code.google.com/p/shedskin/
+ https://github.com/shedskin/shedskin
.. [SWIG] David M. Beazley et al.,
SWIG: An Easy to Use Tool for Integrating Scripting Languages with C and C++,
http://www.swig.org.
diff -Nru cython-0.26.1/docs/src/tutorial/strings.rst cython-0.29.14/docs/src/tutorial/strings.rst
--- cython-0.26.1/docs/src/tutorial/strings.rst 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/docs/src/tutorial/strings.rst 2018-11-24 09:20:06.000000000 +0000
@@ -107,11 +107,22 @@
Passing byte strings
--------------------
+We have dummy C functions declared in
+a file called :file:`c_func.pyx` that we are going to reuse throughout this tutorial:
+
+.. literalinclude:: ../../examples/tutorial/string/c_func.pyx
+
+We make a corresponding :file:`c_func.pxd` to be able to cimport those functions:
+
+.. literalinclude:: ../../examples/tutorial/string/c_func.pxd
+
It is very easy to pass byte strings between C code and Python.
When receiving a byte string from a C library, you can let Cython
convert it into a Python byte string by simply assigning it to a
Python variable::
+ from c_func cimport c_call_returning_a_c_string
+
cdef char* c_string = c_call_returning_a_c_string()
cdef bytes py_string = c_string
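Outside of Cython, ``ctypes`` demonstrates the same copy semantics from plain Python (purely illustrative; the tutorial's ``c_call_returning_a_c_string()`` is a dummy function):

```python
import ctypes

# A NUL-terminated C string living in a ctypes buffer...
buf = ctypes.create_string_buffer(b"hello from C")

# ...copied into an independent Python bytes object, analogous to the
# `cdef bytes py_string = c_string` assignment above.
py_string = ctypes.cast(buf, ctypes.c_char_p).value
```
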
@@ -133,15 +144,9 @@
terminating null byte. In many cases, the user code will know the
length already, e.g. because a C function returned it. In this case,
it is much more efficient to tell Cython the exact number of bytes by
-slicing the C string::
-
- cdef char* c_string = NULL
- cdef Py_ssize_t length = 0
+slicing the C string. Here is an example:
- # get pointer and length from a C function
- get_a_c_string(&c_string, &length)
-
- py_bytes_string = c_string[:length]
+.. literalinclude:: ../../examples/tutorial/string/slicing_c_string.pyx
Here, no additional byte counting is required and ``length`` bytes from
the ``c_string`` will be copied into the Python bytes object, including
@@ -152,20 +157,14 @@
Note that the creation of the Python bytes string can fail with an
exception, e.g. due to insufficient memory. If you need to
:c:func:`free()` the string after the conversion, you should wrap
-the assignment in a try-finally construct::
+the assignment in a try-finally construct:
- from libc.stdlib cimport free
- cdef bytes py_string
- cdef char* c_string = c_call_creating_a_new_c_string()
- try:
- py_string = c_string
- finally:
- free(c_string)
+.. literalinclude:: ../../examples/tutorial/string/try_finally.pyx
To convert the byte string back into a C :c:type:`char*`, use the
opposite assignment::
- cdef char* other_c_string = py_string
+ cdef char* other_c_string = py_string # other_c_string is a 0-terminated string.
This is a very fast operation after which ``other_c_string`` points to
the byte string buffer of the Python string itself. It is tied to the
@@ -203,13 +202,7 @@
Depending on how (and where) the data is being processed, it may be a
good idea to instead receive a 1-dimensional memory view, e.g.
-::
-
- def process_byte_data(unsigned char[:] data):
- length = data.shape[0]
- first_byte = data[0]
- slice_view = data[1:-1]
- ...
+.. literalinclude:: ../../examples/tutorial/string/arg_memview.pyx
Cython's memory views are described in more detail in
:doc:`../userguide/memoryviews`, but the above example already shows
@@ -223,15 +216,9 @@
data, they would otherwise keep the entire original buffer alive. The
general idea here is to be liberal with input by accepting any kind of
byte buffer, but strict with output by returning a simple, well adapted
-object. This can simply be done as follows::
+object. This can simply be done as follows:
- def process_byte_data(unsigned char[:] data):
- # ... process the data
- if return_all:
- return bytes(data)
- else:
- # example for returning a slice
- return bytes(data[5:35])
+.. literalinclude:: ../../examples/tutorial/string/return_memview.pyx
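The built-in ``memoryview`` behaves the same way in uncompiled code, so the "liberal input, strict output" pattern can be tried directly in the interpreter (a sketch of the pattern, not the referenced example file):

```python
def process_byte_data(data, return_all=False):
    view = memoryview(data)   # accepts bytes, bytearray, array.array, ...
    if return_all:
        return bytes(view)
    # return a well-defined slice as an independent bytes object,
    # instead of keeping the whole input buffer alive
    return bytes(view[5:35])

data = bytes(range(40))
tail = process_byte_data(data)
```
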
If the byte input is actually encoded text, and the further processing
should happen at the Unicode level, then the right thing to do is to
@@ -243,43 +230,19 @@
process later.
This kind of input normalisation function will commonly look similar to
-the following::
+the following:
- from cpython.version cimport PY_MAJOR_VERSION
+.. literalinclude:: ../../examples/tutorial/string/to_unicode.pyx
- cdef unicode _ustring(s):
- if type(s) is unicode:
- # fast path for most common case(s)
- return s
- elif PY_MAJOR_VERSION < 3 and isinstance(s, bytes):
- # only accept byte strings in Python 2.x, not in Py3
- return (<bytes>s).decode('ascii')
- elif isinstance(s, unicode):
- # an evil cast to <unicode> might work here in some(!) cases,
- # depending on what the further processing does. to be safe,
- # we can always create a copy instead
- return unicode(s)
- else:
- raise TypeError(...)
-
-And should then be used like this::
-
- def api_func(s):
- text = _ustring(s)
- ...
+And should then be used like this:
+
+.. literalinclude:: ../../examples/tutorial/string/api_func.pyx
Similarly, if the further processing happens at the byte level, but Unicode
string input should be accepted, then the following might work, if you are
-using memory views::
-
- # define a global name for whatever char type is used in the module
- ctypedef unsigned char char_type
+using memory views:
- cdef char_type[:] _chars(s):
- if isinstance(s, unicode):
- # encode to the specific encoding used inside of the module
- s = (<unicode>s).encode('utf8')
- return s
+.. literalinclude:: ../../examples/tutorial/string/to_char.pyx
In this case, you might want to additionally ensure that byte string
input really uses the correct encoding, e.g. if you require pure ASCII
@@ -295,52 +258,13 @@
that they will not modify a string, or to require that users must
not modify a string they return, for example:
-.. code-block:: c
+.. literalinclude:: ../../examples/tutorial/string/someheader.h
- typedef const char specialChar;
- int process_string(const char* s);
- const unsigned char* look_up_cached_string(const unsigned char* key);
-
-Since version 0.18, Cython has support for the ``const`` modifier in
+Cython has support for the ``const`` modifier in
the language, so you can declare the above functions straight away as
-follows::
+follows:
- cdef extern from "someheader.h":
- ctypedef const char specialChar
- int process_string(const char* s)
- const unsigned char* look_up_cached_string(const unsigned char* key)
-
-Previous versions required users to make the necessary declarations
-at a textual level. If you need to support older Cython versions,
-you can use the following approach.
-
-In general, for arguments of external C functions, the ``const``
-modifier does not matter and can be left out in the Cython
-declaration (e.g. in a .pxd file). The C compiler will still do
-the right thing, even if you declare this to Cython::
-
- cdef extern from "someheader.h":
- int process_string(char* s) # note: looses API information!
-
-However, in most other situations, such as for return values and
-variables that use specifically typedef-ed API types, it does matter
-and the C compiler will emit at least a warning if used incorrectly.
-To help with this, you can use the type definitions in the
-``libc.string`` module, e.g.::
-
- from libc.string cimport const_char, const_uchar
-
- cdef extern from "someheader.h":
- ctypedef const_char specialChar
- int process_string(const_char* s)
- const_uchar* look_up_cached_string(const_uchar* key)
-
-Note: even if the API only uses ``const`` for function arguments,
-it is still preferable to properly declare them using these
-provided :c:type:`const_char` types in order to simplify adaptations.
-In Cython 0.18, these standard declarations have been changed to
-use the correct ``const`` modifier, so your code will automatically
-benefit from the new ``const`` support if it uses them.
+.. literalinclude:: ../../examples/tutorial/string/const.pyx
Decoding bytes to text
@@ -358,20 +282,13 @@
ustring = byte_string.decode('UTF-8')
Cython allows you to do the same for a C string, as long as it
-contains no null bytes::
-
- cdef char* some_c_string = c_call_returning_a_c_string()
- ustring = some_c_string.decode('UTF-8')
+contains no null bytes:
-And, more efficiently, for strings where the length is known::
+.. literalinclude:: ../../examples/tutorial/string/naive_decode.pyx
- cdef char* c_string = NULL
- cdef Py_ssize_t length = 0
+And, more efficiently, for strings where the length is known:
- # get pointer and length from a C function
- get_a_c_string(&c_string, &length)
-
- ustring = c_string[:length].decode('UTF-8')
+.. literalinclude:: ../../examples/tutorial/string/decode.pyx
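The same idea can be sketched in plain Python (an illustration only, not the Cython code from ``decode.pyx``): slice the raw buffer to a known length before decoding, instead of relying on a 0 terminator.

```python
# A buffer that contains a null terminator followed by stale bytes, as a
# C function might hand back.  We decode only up to the known length.
raw = b"hello world\x00trailing garbage"
length = raw.index(b"\x00")             # what strlen() would report in C
ustring = raw[:length].decode("UTF-8")
print(ustring)                          # hello world
```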
The same should be used when the string contains null bytes, e.g. when
it uses an encoding like UCS-4, where each character is encoded in four
@@ -379,7 +296,7 @@
Again, no bounds checking is done if slice indices are provided, so
incorrect indices lead to data corruption and crashes. However, using
-negative indices is possible since Cython 0.17 and will inject a call
+negative indices is possible and will inject a call
to :c:func:`strlen()` in order to determine the string length.
Obviously, this only works for 0-terminated strings without internal
null bytes. Text encoded in UTF-8 or one of the ISO-8859 encodings is
@@ -389,23 +306,9 @@
It is common practice to wrap string conversions (and non-trivial type
conversions in general) in dedicated functions, as this needs to be
done in exactly the same way whenever receiving text from C. This
-could look as follows::
-
- from libc.stdlib cimport free
-
- cdef unicode tounicode(char* s):
- return s.decode('UTF-8', 'strict')
+could look as follows:
- cdef unicode tounicode_with_length(
- char* s, size_t length):
- return s[:length].decode('UTF-8', 'strict')
-
- cdef unicode tounicode_with_length_and_free(
- char* s, size_t length):
- try:
- return s[:length].decode('UTF-8', 'strict')
- finally:
- free(s)
+.. literalinclude:: ../../examples/tutorial/string/utf_eight.pyx
Most likely, you will prefer shorter function names in your code based
on the kind of string being handled. Different types of content often
@@ -442,18 +345,9 @@
When wrapping a C++ library, strings will usually come in the form of
the :c:type:`std::string` class. As with C strings, Python byte strings
-automatically coerce from and to C++ strings::
+automatically coerce from and to C++ strings:
- # distutils: language = c++
-
- from libcpp.string cimport string
-
- cdef string s = py_bytes_object
- try:
- s.append('abc')
- py_bytes_object = s
- finally:
- del s
+.. literalinclude:: ../../examples/tutorial/string/cpp_string.pyx
The memory management situation is different than in C because the
creation of a C++ string makes an independent copy of the string
@@ -469,12 +363,9 @@
and then copies its buffer into a new C++ string.
For the other direction, efficient decoding support is available
-in Cython 0.17 and later::
+in Cython 0.17 and later:
- cdef string s = string(b'abcdefg')
-
- ustring1 = s.decode('UTF-8')
- ustring2 = s[2:-2].decode('UTF-8')
+.. literalinclude:: ../../examples/tutorial/string/decode_cpp_string.pyx
For C++ strings, decoding slices will always take the proper length
of the string into account and apply Python slicing semantics (e.g.
@@ -496,54 +387,30 @@
objects can reduce the code overhead a little. In this case, you
can set the ``c_string_type`` directive in your module to :obj:`unicode`
and the ``c_string_encoding`` to the encoding that your C code uses,
-for example::
-
- # cython: c_string_type=unicode, c_string_encoding=utf8
+for example:
- cdef char* c_string = 'abcdefg'
-
- # implicit decoding:
- cdef object py_unicode_object = c_string
-
- # explicit conversion to Python bytes:
- py_bytes_object = c_string
+.. literalinclude:: ../../examples/tutorial/string/auto_conversion_1.pyx
The second use case is when all C strings that are being processed
only contain ASCII encodable characters (e.g. numbers) and you want
your code to use the native legacy string type in Python 2 for them,
instead of always using Unicode. In this case, you can set the
-string type to :obj:`str`::
-
- # cython: c_string_type=str, c_string_encoding=ascii
+string type to :obj:`str`:
- cdef char* c_string = 'abcdefg'
-
- # implicit decoding in Py3, bytes conversion in Py2:
- cdef object py_str_object = c_string
-
- # explicit conversion to Python bytes:
- py_bytes_object = c_string
-
- # explicit conversion to Python unicode:
- py_bytes_object = c_string
+.. literalinclude:: ../../examples/tutorial/string/auto_conversion_2.pyx
The other direction, i.e. automatic encoding to C strings, is only
-supported for the ASCII codec (and the "default encoding", which is
-runtime specific and may or may not be ASCII). This is because
-CPython handles the memory management in this case by keeping an
-encoded copy of the string alive together with the original unicode
-string. Otherwise, there would be no way to limit the lifetime of
-the encoded string in any sensible way, thus rendering any attempt to
-extract a C string pointer from it a dangerous endeavour. As long
-as you stick to the ASCII encoding for the ``c_string_encoding``
-directive, though, the following will work::
-
- # cython: c_string_type=unicode, c_string_encoding=ascii
-
- def func():
- ustring = u'abc'
- cdef char* s = ustring
- return s[0] # returns u'a'
+supported for ASCII and the "default encoding", which is usually UTF-8
+in Python 3 and usually ASCII in Python 2. CPython handles the memory
+management in this case by keeping an encoded copy of the string alive
+together with the original unicode string. Otherwise, there would be no
+way to limit the lifetime of the encoded string in any sensible way,
+thus rendering any attempt to extract a C string pointer from it a
+dangerous endeavour. The following safely converts a Unicode string to
+ASCII (change ``c_string_encoding`` to ``default`` to use the default
+encoding instead):
+
+.. literalinclude:: ../../examples/tutorial/string/auto_conversion_3.pyx
(This example uses a function context in order to safely control the
lifetime of the Unicode string. Global Python variables can be
@@ -600,7 +467,7 @@
the parser to read all unprefixed :obj:`str` literals in a source file as
unicode string literals, just like Python 3.
-.. _`CEP 108`: http://wiki.cython.org/enhancements/stringliterals
+.. _`CEP 108`: https://github.com/cython/cython/wiki/enhancements-stringliterals
Single bytes and characters
---------------------------
@@ -608,7 +475,7 @@
The Python C-API uses the normal C :c:type:`char` type to represent
a byte value, but it has two special integer types for a Unicode code
point value, i.e. a single Unicode character: :c:type:`Py_UNICODE`
-and :c:type:`Py_UCS4`. Since version 0.13, Cython supports the
+and :c:type:`Py_UCS4`. Cython supports the
first natively, support for :c:type:`Py_UCS4` is new in Cython 0.15.
:c:type:`Py_UNICODE` is either defined as an unsigned 2-byte or
4-byte integer, or as :c:type:`wchar_t`, depending on the platform.
@@ -745,30 +612,18 @@
Cython 0.13 supports efficient iteration over :c:type:`char*`,
bytes and unicode strings, as long as the loop variable is
appropriately typed. So the following will generate the expected
-C code::
+C code:
- cdef char* c_string = ...
+.. literalinclude:: ../../examples/tutorial/string/for_char.pyx
- cdef char c
- for c in c_string[:100]:
- if c == 'A': ...
+The same applies to bytes objects:
-The same applies to bytes objects::
-
- cdef bytes bytes_string = ...
-
- cdef char c
- for c in bytes_string:
- if c == 'A': ...
+.. literalinclude:: ../../examples/tutorial/string/for_bytes.pyx
For unicode objects, Cython will automatically infer the type of the
-loop variable as :c:type:`Py_UCS4`::
-
- cdef unicode ustring = ...
+loop variable as :c:type:`Py_UCS4`:
- # NOTE: no typing required for 'uchar' !
- for uchar in ustring:
- if uchar == u'A': ...
+.. literalinclude:: ../../examples/tutorial/string/for_unicode.pyx
The automatic type inference usually leads to much more efficient code
here. However, note that some unicode operations still require the
@@ -781,11 +636,9 @@
it.
There are also optimisations for ``in`` tests, so that the following
-code will run in plain C code, (actually using a switch statement)::
+code will run in plain C code (actually using a switch statement):
- cdef Py_UCS4 uchar_val = get_a_unicode_character()
- if uchar_val in u'abcABCxY':
- ...
+.. literalinclude:: ../../examples/tutorial/string/if_char_in.pyx
Combined with the looping optimisation above, this can result in very
efficient character switching code, e.g. in unicode parsers.
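The pattern itself can be sketched in plain Python (the ``classify`` helper is hypothetical, not part of the Cython examples); when the operand is typed as :c:type:`Py_UCS4`, Cython compiles the membership test below into a C switch statement:

```python
# A membership test against a short string literal: in typed Cython code
# this becomes a C switch over the character's code point.
def classify(uchar):
    if uchar in u'abcABCxY':
        return 'known'
    return 'unknown'

print(classify(u'A'), classify(u'z'))   # known unknown
```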
diff -Nru cython-0.26.1/docs/src/userguide/buffer.rst cython-0.29.14/docs/src/userguide/buffer.rst
--- cython-0.26.1/docs/src/userguide/buffer.rst 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/docs/src/userguide/buffer.rst 2018-11-24 09:20:06.000000000 +0000
@@ -16,21 +16,7 @@
where the number of columns is fixed at construction time
but rows can be added dynamically.
-::
-
- # matrix.pyx
- from libcpp.vector cimport vector
-
- cdef class Matrix:
- cdef unsigned ncols
- cdef vector[float] v
-
- def __cinit__(self, unsigned ncols):
- self.ncols = ncols
-
- def add_row(self):
- """Adds a row, initially zero-filled."""
- self.v.extend(self.ncols)
+.. literalinclude:: ../../examples/userguide/buffer/matrix.pyx
There are no methods to do anything productive with the matrices' contents.
We could implement custom ``__getitem__``, ``__setitem__``, etc. for this,
@@ -41,51 +27,7 @@
``__getbuffer__`` and ``__releasebuffer__``,
which Cython handles specially.
-::
-
- from cpython cimport Py_buffer
- from libcpp.vector cimport vector
-
- cdef class Matrix:
- cdef Py_ssize_t ncols
- cdef Py_ssize_t shape[2]
- cdef Py_ssize_t strides[2]
- cdef vector[float] v
-
- def __cinit__(self, Py_ssize_t ncols):
- self.ncols = ncols
-
- def add_row(self):
- """Adds a row, initially zero-filled."""
- self.v.extend(self.ncols)
-
- def __getbuffer__(self, Py_buffer *buffer, int flags):
- cdef Py_ssize_t itemsize = sizeof(self.v[0])
-
- self.shape[0] = self.v.size() / self.ncols
- self.shape[1] = self.ncols
-
- # Stride 1 is the distance, in bytes, between two items in a row;
- # this is the distance between two adjacent items in the vector.
- # Stride 0 is the distance between the first elements of adjacent rows.
- self.strides[1] = <Py_ssize_t>( <char *>&(self.v[1])
- - <char *>&(self.v[0]))
- self.strides[0] = self.ncols * self.strides[1]
-
- buffer.buf = &(self.v[0])
- buffer.format = 'f' # float
- buffer.internal = NULL # see References
- buffer.itemsize = itemsize
- buffer.len = self.v.size() * itemsize # product(shape) * itemsize
- buffer.ndim = 2
- buffer.obj = self
- buffer.readonly = 0
- buffer.shape = self.shape
- buffer.strides = self.strides
- buffer.suboffsets = NULL # for pointer arrays only
-
- def __releasebuffer__(self, Py_buffer *buffer):
- pass
+.. literalinclude:: ../../examples/userguide/buffer/matrix_with_buffer.pyx
The method ``Matrix.__getbuffer__`` fills a descriptor structure,
called a ``Py_buffer``, that is defined by the Python C-API.
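The fields that ``__getbuffer__`` fills (``format``, ``itemsize``, ``ndim``, ``shape``, ``strides``) are observable from Python on any :class:`memoryview`. As a minimal sketch, assuming a 2x3 matrix of 4-byte floats backed by a plain ``bytearray``:

```python
# A 2x3 float32 view over raw bytes mirrors the layout the Matrix
# example's __getbuffer__ describes: row stride 12 bytes, item stride 4.
buf = bytearray(2 * 3 * 4)                  # 2 rows, 3 cols, 4-byte floats
mv = memoryview(buf).cast('f', shape=[2, 3])
print(mv.format, mv.itemsize, mv.ndim)      # f 4 2
print(mv.shape, mv.strides)                 # (2, 3) (12, 4)
```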
@@ -133,29 +75,7 @@
We can add a reference count to each matrix,
and lock it for mutation whenever a view exists.
-::
-
- cdef class Matrix:
- # ...
- cdef int view_count
-
- def __cinit__(self, Py_ssize_t ncols):
- self.ncols = ncols
- self.view_count = 0
-
- def add_row(self):
- if self.view_count > 0:
- raise ValueError("can't add row while being viewed")
- self.v.resize(self.v.size() + self.ncols)
-
- def __getbuffer__(self, Py_buffer *buffer, int flags):
- # ... as before
-
- self.view_count += 1
-
- def __releasebuffer__(self, Py_buffer *buffer):
- self.view_count -= 1
-
+.. literalinclude:: ../../examples/userguide/buffer/view_count.pyx
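CPython's own :class:`bytearray` implements exactly this kind of export counting: while a :class:`memoryview` of it exists, any operation that would resize the buffer raises :exc:`BufferError`.

```python
# Resizing a bytearray while a view is exported fails, just like the
# view_count check in the Matrix example.
data = bytearray(b'abcdef')
view = memoryview(data)
try:
    data.append(0x67)        # would resize the buffer while it is viewed
    rejected = False
except BufferError:
    rejected = True
view.release()
data.append(0x67)            # fine once the view has been released
```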
Flags
-----
Binary files /tmp/tmp0lrW9P/aTeTJbw7H9/cython-0.26.1/docs/src/userguide/compute_typed_html.jpg and /tmp/tmp0lrW9P/hdCxpT7ujz/cython-0.29.14/docs/src/userguide/compute_typed_html.jpg differ
diff -Nru cython-0.26.1/docs/src/userguide/debugging.rst cython-0.29.14/docs/src/userguide/debugging.rst
--- cython-0.26.1/docs/src/userguide/debugging.rst 2016-12-10 15:41:15.000000000 +0000
+++ cython-0.29.14/docs/src/userguide/debugging.rst 2018-11-24 09:20:06.000000000 +0000
@@ -58,9 +58,11 @@
with an interpreter that is compiled with debugging symbols (i.e. configured
with ``--with-pydebug`` or compiled with the ``-g`` CFLAG). If your Python is
installed and managed by your package manager you probably need to install debug
-support separately, e.g. for ubuntu::
+support separately. If you use NumPy, you also need to install the NumPy debug
+package, or you'll see an `import error for multiarray `_.
+E.g. for Ubuntu::
- $ sudo apt-get install python-dbg
+ $ sudo apt-get install python-dbg python-numpy-dbg
$ python-dbg setup.py build_ext --inplace
Then you need to run your script with ``python-dbg`` also. Ensure that when
diff -Nru cython-0.26.1/docs/src/userguide/early_binding_for_speed.rst cython-0.29.14/docs/src/userguide/early_binding_for_speed.rst
--- cython-0.26.1/docs/src/userguide/early_binding_for_speed.rst 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/docs/src/userguide/early_binding_for_speed.rst 2019-02-27 12:23:19.000000000 +0000
@@ -22,22 +22,7 @@
For example, consider the following (silly) code example:
-.. sourcecode:: cython
-
- cdef class Rectangle:
- cdef int x0, y0
- cdef int x1, y1
- def __init__(self, int x0, int y0, int x1, int y1):
- self.x0 = x0; self.y0 = y0; self.x1 = x1; self.y1 = y1
- def area(self):
- area = (self.x1 - self.x0) * (self.y1 - self.y0)
- if area < 0:
- area = -area
- return area
-
- def rectArea(x0, y0, x1, y1):
- rect = Rectangle(x0, y0, x1, y1)
- return rect.area()
+.. literalinclude:: ../../examples/userguide/early_binding_for_speed/rectangle.pyx
In the :func:`rectArea` method, the call to :meth:`rect.area` and the
:meth:`.area` method contain a lot of Python overhead.
@@ -45,26 +30,7 @@
However, in Cython, it is possible to eliminate a lot of this overhead in cases
where calls occur within Cython code. For example:
-.. sourcecode:: cython
-
- cdef class Rectangle:
- cdef int x0, y0
- cdef int x1, y1
- def __init__(self, int x0, int y0, int x1, int y1):
- self.x0 = x0; self.y0 = y0; self.x1 = x1; self.y1 = y1
- cdef int _area(self):
- cdef int area
- area = (self.x1 - self.x0) * (self.y1 - self.y0)
- if area < 0:
- area = -area
- return area
- def area(self):
- return self._area()
-
- def rectArea(x0, y0, x1, y1):
- cdef Rectangle rect
- rect = Rectangle(x0, y0, x1, y1)
- return rect._area()
+.. literalinclude:: ../../examples/userguide/early_binding_for_speed/rectangle_cdef.pyx
Here, in the Rectangle extension class, we have defined two different area
calculation methods, the efficient :meth:`_area` C method, and the
@@ -80,29 +46,7 @@
can also be accessed from pure Python code at the cost of the Python access
overheads. Consider this code:
-.. sourcecode:: cython
-
- cdef class Rectangle:
- cdef int x0, y0
- cdef int x1, y1
- def __init__(self, int x0, int y0, int x1, int y1):
- self.x0 = x0; self.y0 = y0; self.x1 = x1; self.y1 = y1
- cpdef int area(self):
- cdef int area
- area = (self.x1 - self.x0) * (self.y1 - self.y0)
- if area < 0:
- area = -area
- return area
-
- def rectArea(x0, y0, x1, y1):
- cdef Rectangle rect
- rect = Rectangle(x0, y0, x1, y1)
- return rect.area()
-
-.. note::
-
- in earlier versions of Cython, the :keyword:`cpdef` keyword is
- ``rdef`` - but has the same effect).
+.. literalinclude:: ../../examples/userguide/early_binding_for_speed/rectangle_cpdef.pyx
Here, we just have a single area method, declared as :keyword:`cpdef` to make it
efficiently callable as a C function, but still accessible from pure Python
@@ -111,7 +55,7 @@
If within Cython code, we have a variable already 'early-bound' (ie, declared
explicitly as type Rectangle, (or cast to type Rectangle), then invoking its
area method will use the efficient C code path and skip the Python overhead.
-But if in Pyrex or regular Python code we have a regular object variable
+But if in Cython or regular Python code we have a regular object variable
storing a Rectangle object, then invoking the area method will require:
* an attribute lookup for the area method
diff -Nru cython-0.26.1/docs/src/userguide/extension_types.rst cython-0.29.14/docs/src/userguide/extension_types.rst
--- cython-0.26.1/docs/src/userguide/extension_types.rst 2017-08-12 14:06:59.000000000 +0000
+++ cython-0.29.14/docs/src/userguide/extension_types.rst 2018-11-24 09:20:06.000000000 +0000
@@ -12,19 +12,9 @@
As well as creating normal user-defined classes with the Python class
statement, Cython also lets you create new built-in Python types, known as
extension types. You define an extension type using the :keyword:`cdef` class
-statement. Here's an example::
+statement. Here's an example:
- cdef class Shrubbery:
-
- cdef int width, height
-
- def __init__(self, w, h):
- self.width = w
- self.height = h
-
- def describe(self):
- print "This shrubbery is", self.width, \
- "by", self.height, "cubits."
+.. literalinclude:: ../../examples/userguide/extension_types/shrubbery.pyx
As you can see, a Cython extension type definition looks a lot like a Python
class definition. Within it, you use the def statement to define methods that
@@ -39,14 +29,16 @@
.. _readonly:
-Attributes
-============
+Static Attributes
+=================
Attributes of an extension type are stored directly in the object's C struct.
The set of attributes is fixed at compile time; you can't add attributes to an
extension type instance at run time simply by assigning to them, as you could
-with a Python class instance. (You can subclass the extension type in Python
-and add attributes to instances of the subclass, however.)
+with a Python class instance. However, you can explicitly enable support
+for dynamically assigned attributes, or subclass the extension type with a normal
+Python class, which then supports arbitrary attribute assignments.
+See :ref:`dynamic_attributes`.
There are two ways that attributes of an extension type can be accessed: by
Python attribute lookup, or by direct access to the C struct from Cython code.
@@ -56,11 +48,9 @@
By default, extension type attributes are only accessible by direct access,
not Python access, which means that they are not accessible from Python code.
To make them accessible from Python code, you need to declare them as
-:keyword:`public` or :keyword:`readonly`. For example::
+:keyword:`public` or :keyword:`readonly`. For example:
- cdef class Shrubbery:
- cdef public int width, height
- cdef readonly float depth
+.. literalinclude:: ../../examples/userguide/extension_types/python_access.pyx
makes the width and height attributes readable and writable from Python code,
and the depth attribute readable but not writable.
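A plain-Python analogue of the ``public``/``readonly`` split (ordinary Python, not the Cython code itself) uses regular attributes for the writable fields and a getter-only property for the read-only one:

```python
class Shrubbery:
    def __init__(self, w, h, d):
        self.width = w        # freely readable and writable, like "cdef public"
        self.height = h
        self._depth = d       # private backing field

    @property
    def depth(self):          # readable but not writable, like "cdef readonly"
        return self._depth

sh = Shrubbery(3, 4, 5.0)
sh.width = 10                 # allowed
try:
    sh.depth = 1.0            # rejected: the property has no setter
except AttributeError:
    pass
```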
@@ -76,6 +66,24 @@
Python access, not direct access. All the attributes of an extension type
are always readable and writable by C-level access.
+
+.. _dynamic_attributes:
+
+Dynamic Attributes
+==================
+
+It is not possible to add attributes to an extension type at runtime by default.
+You have two ways of avoiding this limitation, both of which add an overhead
+when a method is called from Python code, especially when calling ``cpdef`` methods.
+
+The first approach is to create a Python subclass:
+
+.. literalinclude:: ../../examples/userguide/extension_types/extendable_animal.pyx
+
+Declaring a ``__dict__`` attribute is the second way of enabling dynamic attributes:
+
+.. literalinclude:: ../../examples/userguide/extension_types/dict_animal.pyx
+
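Both approaches have a direct plain-Python analogue (class names mirror the example files above, but this is ordinary Python, not the Cython code itself): ``__slots__`` fixes an instance's attribute set the way an extension type's C struct does, and a subclass without ``__slots__`` regains a ``__dict__``:

```python
class Animal:
    __slots__ = ('number_of_legs',)      # fixed attribute set, like a C struct
    def __init__(self, number_of_legs):
        self.number_of_legs = number_of_legs

class ExtendableAnimal(Animal):          # no __slots__: instances get a __dict__
    pass

dog = ExtendableAnimal(4)
dog.has_tail = True                      # dynamic attribute works on the subclass
try:
    Animal(4).has_tail = True            # rejected on the slotted base class
    rejected = False
except AttributeError:
    rejected = True
```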
Type declarations
===================
@@ -97,21 +105,23 @@
-- the code will compile, but an attribute error will be raised at run time.
The solution is to declare ``sh`` as being of type :class:`Shrubbery`, as
-follows::
+follows:
- cdef widen_shrubbery(Shrubbery sh, extra_width):
- sh.width = sh.width + extra_width
+.. literalinclude:: ../../examples/userguide/extension_types/widen_shrubbery.pyx
Now the Cython compiler knows that ``sh`` has a C attribute called
:attr:`width` and will generate code to access it directly and efficiently.
-The same consideration applies to local variables, for example,::
+The same consideration applies to local variables, for example:
- cdef Shrubbery another_shrubbery(Shrubbery sh1):
- cdef Shrubbery sh2
- sh2 = Shrubbery()
- sh2.width = sh1.width
- sh2.height = sh1.height
- return sh2
+.. literalinclude:: ../../examples/userguide/extension_types/shrubbery_2.pyx
+
+.. note::
+
+ Here we ``cimport`` the class :class:`Shrubbery`; this is necessary to
+ declare the type at compile time. To be able to ``cimport`` an extension type,
+ we split the class definition into two parts, one in a definition file and
+ the other in the corresponding implementation file. You should read
+ :ref:`sharing_extension_types` to learn how to do that.
Type Testing and Casting
@@ -121,13 +131,13 @@
To access its width I could write::
cdef Shrubbery sh = quest()
- print sh.width
+ print(sh.width)
which requires the use of a local variable and performs a type test on assignment.
If you *know* the return value of :meth:`quest` will be of type :class:`Shrubbery`
you can use a cast to write::
- print (