HbScorez is a web application that processes handball game reports from various handball associations, districts, and leagues. It analyzes player scores and displays statistics and rankings.
Update django from 2.2.2 to 2.2.4.
Changelog
### 2.2.4
```
==========================
*August 1, 2019*
Django 2.2.4 fixes security issues and several bugs in 2.2.3.
CVE-2019-14232: Denial-of-service possibility in ``django.utils.text.Truncator``
================================================================================
If ``django.utils.text.Truncator``'s ``chars()`` and ``words()`` methods
were passed the ``html=True`` argument, they were extremely slow to evaluate
certain inputs due to a catastrophic backtracking vulnerability in a regular
expression. The ``chars()`` and ``words()`` methods are used to implement the
:tfilter:`truncatechars_html` and :tfilter:`truncatewords_html` template
filters, which were thus vulnerable.
The regular expressions used by ``Truncator`` have been simplified in order to
avoid potential backtracking issues. As a consequence, trailing punctuation may
now at times be included in the truncated output.
CVE-2019-14233: Denial-of-service possibility in ``strip_tags()``
=================================================================
Due to the behavior of the underlying ``HTMLParser``,
:func:`django.utils.html.strip_tags` would be extremely slow to evaluate
certain inputs containing large sequences of nested incomplete HTML entities.
The ``strip_tags()`` method is used to implement the corresponding
:tfilter:`striptags` template filter, which was thus also vulnerable.
``strip_tags()`` now avoids recursive calls to ``HTMLParser`` when it stops
making progress removing tags because only necessarily incomplete HTML
entities remain.
Remember that absolutely NO guarantee is provided about the results of
``strip_tags()`` being HTML safe. So NEVER mark safe the result of a
``strip_tags()`` call without escaping it first, for example with
:func:`django.utils.html.escape`.
CVE-2019-14234: SQL injection possibility in key and index lookups for ``JSONField``/``HStoreField``
====================================================================================================
:lookup:`Key and index lookups <jsonfield.key>` for
:class:`~django.contrib.postgres.fields.JSONField` and :lookup:`key lookups
<hstorefield.key>` for :class:`~django.contrib.postgres.fields.HStoreField`
were subject to SQL injection, using a suitably crafted dictionary, with
dictionary expansion, as the ``**kwargs`` passed to ``QuerySet.filter()``.
CVE-2019-14235: Potential memory exhaustion in ``django.utils.encoding.uri_to_iri()``
=====================================================================================
If passed certain inputs, :func:`django.utils.encoding.uri_to_iri` could lead
to significant memory usage due to excessive recursion when re-percent-encoding
invalid UTF-8 octet sequences.
``uri_to_iri()`` now avoids recursion when re-percent-encoding invalid UTF-8
octet sequences.
Bugfixes
========
* Fixed a regression in Django 2.2 where ordering a ``QuerySet.union()``,
``intersection()``, or ``difference()`` by a field type present more than
once resulted in the wrong ordering being used (:ticket:`30628`).
* Fixed a migration crash on PostgreSQL when adding a check constraint
with a ``contains`` lookup on
:class:`~django.contrib.postgres.fields.DateRangeField` or
:class:`~django.contrib.postgres.fields.DateTimeRangeField`, if the right
hand side of an expression is the same type (:ticket:`30621`).
* Fixed a regression in Django 2.2 where auto-reloader crashes if a file path
contains null characters (``'\x00'``) (:ticket:`30506`).
* Fixed a regression in Django 2.2 where auto-reloader crashes if a translation
directory cannot be resolved (:ticket:`30647`).
==========================
```
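The CVE-2019-14233 note above is worth restating in code: ``strip_tags()`` output carries no HTML-safety guarantee. A minimal sketch of the safe pattern (the function name is illustrative, not part of Django):
```
from django.utils.html import escape, strip_tags

def plain_text_preview(untrusted_html):
    # strip_tags() removes markup but makes no HTML-safety guarantee,
    # so escape the result before treating or marking it as safe.
    return escape(strip_tags(untrusted_html))
```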
### 2.2.3
```
==========================
*July 1, 2019*
Django 2.2.3 fixes a security issue and several bugs in 2.2.2. Also, the latest
string translations from Transifex are incorporated.
CVE-2019-12781: Incorrect HTTP detection with reverse-proxy connecting via HTTPS
--------------------------------------------------------------------------------
When deployed behind a reverse-proxy connecting to Django via HTTPS,
:attr:`django.http.HttpRequest.scheme` would incorrectly detect client
requests made via HTTP as using HTTPS. This entails incorrect results for
:meth:`~django.http.HttpRequest.is_secure`, and
:meth:`~django.http.HttpRequest.build_absolute_uri`, and that HTTP
requests would not be redirected to HTTPS in accordance with
:setting:`SECURE_SSL_REDIRECT`.
``HttpRequest.scheme`` now respects :setting:`SECURE_PROXY_SSL_HEADER`, if it is
configured, and the appropriate header is set on the request, for both HTTP and
HTTPS requests.
If you deploy Django behind a reverse-proxy that forwards HTTP requests, and
that connects to Django via HTTPS, be sure to verify that your application
correctly handles code paths relying on ``scheme``, ``is_secure()``,
``build_absolute_uri()``, and ``SECURE_SSL_REDIRECT``.
Bugfixes
========
* Fixed a regression in Django 2.2 where :class:`~django.db.models.Avg`,
:class:`~django.db.models.StdDev`, and :class:`~django.db.models.Variance`
crash with ``filter`` argument (:ticket:`30542`).
* Fixed a regression in Django 2.2.2 where auto-reloader crashes with
``AttributeError``, e.g. when using ``ipdb`` (:ticket:`30588`).
==========================
```
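If a deployment matches the CVE-2019-12781 scenario (a reverse proxy that connects to Django via HTTPS), the fix only helps when ``SECURE_PROXY_SSL_HEADER`` is configured. A minimal settings sketch, assuming the proxy sets ``X-Forwarded-Proto``:
```
# settings.py (sketch): trust the scheme header set by the proxy and
# redirect plain-HTTP requests to HTTPS.
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
SECURE_SSL_REDIRECT = True
```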
Links
- PyPI: https://pypi.org/project/django
- Changelog: https://pyup.io/changelogs/django/
- Homepage: https://www.djangoproject.com/
Update lxml from 4.3.4 to 4.4.0.
Changelog
### 4.4.0
```
==================
Features added
--------------
* ``Element.clear()`` accepts a new keyword argument ``keep_tail=True`` to
clear everything but the tail text. This is helpful in some document-style
use cases.
* When creating attributes or namespaces from a dict in Python 3.6+, lxml now
preserves the original insertion order of that dict, instead of always sorting
the items by name. A similar change was made for ElementTree in CPython 3.8.
See https://bugs.python.org/issue34160
* Integer elements in ``lxml.objectify`` implement the ``__index__()`` special method.
* GH269: Read-only elements in XSLT were missing the ``nsmap`` property.
Original patch by Jan Pazdziora.
* ElementInclude can now restrict the maximum inclusion depth via a ``max_depth``
argument to prevent content explosion. It is limited to 6 by default.
* The ``target`` object of the XMLParser can have ``start_ns()`` and ``end_ns()``
callback methods to listen to namespace declarations.
* The ``TreeBuilder`` has new arguments ``comment_factory`` and ``pi_factory`` to
pass factories for creating comments and processing instructions, as well as
flag arguments ``insert_comments`` and ``insert_pis`` to discard them from the
tree when set to false.
* A `C14N 2.0 <https://www.w3.org/TR/xml-c14n2/>`_ implementation was added as
``etree.canonicalize()``, a corresponding ``C14NWriterTarget`` class, and
a ``c14n2`` serialisation method.
Bugs fixed
----------
* When writing to file paths that contain the URL escape character '%', the file
path could wrongly be mangled by URL unescaping and thus write to a different
file or directory. Code that writes to file paths that are provided by untrusted
sources, but that must work with previous versions of lxml, should best either
reject paths that contain '%' characters, or otherwise make sure that the path
does not contain maliciously injected '%XX' URL hex escapes for paths like '../'.
* Assigning to Element child slices with negative step could insert the slice at
the wrong position, starting too far on the left.
* Assigning to Element child slices with overly large step size could take very
long, regardless of the length of the actual slice.
* Assigning to Element child slices of the wrong size could sometimes fail to
raise a ValueError (like a list assignment would) and instead assign outside
of the original slice bounds or leave parts of it unreplaced.
* The ``comment`` and ``pi`` events in ``iterwalk()`` were never triggered, and
instead, comments and processing instructions in the tree were reported as
``start`` elements. Also, when walking an ElementTree (as opposed to its root
element), comments and PIs outside of the root element are now reported.
* LP1827833: The RelaxNG compact syntax support was broken with recent versions
of ``rnc2rng``.
* LP1758553: The HTML elements ``source`` and ``track`` were added to the list
of empty tags in ``lxml.html.defs``.
* Registering a prefix other than "xml" for the XML namespace is now rejected.
* Failing to write XSLT output to a file could raise a misleading exception.
It now raises ``IOError``.
Other changes
-------------
* Support for Python 3.4 was removed.
* When using ``Element.find*()`` with prefix-namespace mappings, the empty string
is now accepted to define a default namespace, in addition to the previously
supported ``None`` prefix. Empty strings are more convenient since they keep
all prefix keys in a namespace dict as strings, which simplifies sorting etc.
* The ``ElementTree.write_c14n()`` method has been deprecated in favour of the
long preferred ``ElementTree.write(f, method="c14n")``. It will be removed
in a future release.
```
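Two of the 4.4.0 additions above in a short sketch — ``Element.clear(keep_tail=True)`` and the new C14N 2.0 serialisation (expected output noted in comments, assuming lxml >= 4.4.0):
```
from lxml import etree

root = etree.fromstring("<root><a>text</a>tail</root>")
# New in 4.4.0: clear the element but keep its tail text.
root[0].clear(keep_tail=True)
print(etree.tostring(root))  # expected: b'<root><a/>tail</root>'

# New in 4.4.0: C14N 2.0 canonicalisation; returns the canonical
# form as text when no output file is given.
print(etree.canonicalize("<root xmlns='ns'><a/></root>"))
```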
### 4.3.5
```
==================
* Rebuilt with Cython 0.29.13 to support Python 3.8.
```
Links
- PyPI: https://pypi.org/project/lxml
- Changelog: https://pyup.io/changelogs/lxml/
- Homepage: http://lxml.de/
Update numpy from 1.16.4 to 1.17.0.
Changelog
### 1.17.0
```
==========================
This NumPy release contains a number of new features that should substantially
improve its performance and usefulness; see Highlights below for a summary. The
supported Python versions are 3.5-3.7; note that Python 2.7 has been dropped.
Python 3.8b1 should work with the released source packages, but there are no
future guarantees.
Downstream developers should use Cython >= 0.29.10 for Python 3.8 support and
OpenBLAS >= 3.7 (not currently out) to avoid problems on the Skylake
architecture. The NumPy wheels on PyPI are built from the OpenBLAS development
branch in order to avoid those problems.
Highlights
==========
* A new extensible random module along with four selectable random number
generators and improved seeding designed for use in parallel processes has
been added. The currently available bit generators are MT19937, PCG64,
Philox, and SFC64. See below under New Features.
* NumPy's FFT implementation was changed from fftpack to pocketfft, resulting
in faster, more accurate transforms and better handling of datasets of
prime length. See below under Improvements.
* New radix sort and timsort sorting methods. It is currently not possible to
choose which will be used, but they are hardwired to the datatype and used
when either ``stable`` or ``mergesort`` is passed as the method. See below
under Improvements.
* Overriding numpy functions is now possible by default,
see ``__array_function__`` below.
New functions
=============
* `numpy.errstate` is now also a function decorator
Deprecations
============
``np.polynomial`` functions warn when passed ``float`` in place of ``int``
--------------------------------------------------------------------------
Previously functions in this module would accept ``float`` values provided they
were integral (``1.0``, ``2.0``, etc). For consistency with the rest of numpy,
doing so is now deprecated, and in future will raise a ``TypeError``.
Similarly, passing a float like ``0.5`` in place of an integer will now raise a
``TypeError`` instead of the previous ``ValueError``.
Deprecate ``numpy.distutils.exec_command`` and ``numpy.distutils.temp_file_name``
---------------------------------------------------------------------------------
The internal use of these functions has been refactored and there are better
alternatives. Replace ``exec_command`` with `subprocess.Popen` and
``temp_file_name`` with `tempfile.mkstemp`.
Writeable flag of C-API wrapped arrays
--------------------------------------
When an array is created from the C-API to wrap a pointer to data, the only
indication we have of the read-write nature of the data is the ``writeable``
flag set during creation. It is dangerous to force the flag to writeable.
In the future it will not be possible to switch the writeable flag to ``True``
from python.
This deprecation should not affect many users since arrays created in such
a manner are very rare in practice and only available through the NumPy C-API.
`numpy.nonzero` should no longer be called on 0d arrays
-------------------------------------------------------
The behavior of nonzero on 0d arrays was surprising, making uses of it almost
always incorrect. If the old behavior was intended, it can be preserved without
a warning by using ``nonzero(atleast_1d(arr))`` instead of ``nonzero(arr)``.
In a future release, it is most likely this will raise a `ValueError`.
Writing to the result of `numpy.broadcast_arrays` will warn
-----------------------------------------------------------
Commonly `numpy.broadcast_arrays` returns a writeable array with internal
overlap, making it unsafe to write to. A future version will set the
``writeable`` flag to ``False``, and require users to manually set it to
``True`` if they are sure that is what they want to do. Now writing to it will
emit a deprecation warning with instructions to set the ``writeable`` flag
``True``. Note that if one were to inspect the flag before setting it, one
would find it would already be ``True``. Explicitly setting it, though, as one
will need to do in future versions, clears an internal flag that is used to
produce the deprecation warning. To help alleviate confusion, an additional
`FutureWarning` will be emitted when accessing the ``writeable`` flag state to
clarify the contradiction.
Future Changes
==============
Shape-1 fields in dtypes won't be collapsed to scalars in a future version
--------------------------------------------------------------------------
Currently, a field specified as ``[(name, dtype, 1)]`` or ``"1type"`` is
interpreted as a scalar field (i.e., the same as ``[(name, dtype)]`` or
``[(name, dtype, ())]``). This now raises a FutureWarning; in a future version,
it will be interpreted as a shape-(1,) field, i.e. the same as ``[(name,
dtype, (1,))]`` or ``"(1,)type"`` (consistently with ``[(name, dtype, n)]``
/ ``"ntype"`` with ``n>1``, which is already equivalent to ``[(name, dtype,
(n,))]`` / ``"(n,)type"``).
Compatibility notes
===================
float16 subnormal rounding
--------------------------
Casting from a different floating point precision to float16 used incorrect
rounding in some edge cases. This means in rare cases, subnormal results will
now be rounded up instead of down, changing the last bit (ULP) of the result.
Signed zero when using divmod
-----------------------------
Starting in version 1.12.0, numpy incorrectly returned a negatively signed zero
when using the ``divmod`` and ``floor_divide`` functions when the result was
zero. For example::
>>> np.zeros(10)//1
array([-0., -0., -0., -0., -0., -0., -0., -0., -0., -0.])
With this release, the result is correctly returned as a positively signed
zero::
>>> np.zeros(10)//1
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
``MaskedArray.mask`` now returns a view of the mask, not the mask itself
------------------------------------------------------------------------
Returning the mask itself was unsafe, as it could be reshaped in place which
would violate expectations of the masked array code. Its behavior is now
consistent with the ``.data`` attribute, which also returns a view.
The underlying mask can still be accessed with ``._mask`` if it is needed.
Tests that contain ``assert x.mask is not y.mask`` or similar will need to be
updated.
Do not lookup ``__buffer__`` attribute in `numpy.frombuffer`
------------------------------------------------------------
Looking up ``__buffer__`` attribute in `numpy.frombuffer` was undocumented and
non-functional. This code was removed. If needed, use
``frombuffer(memoryview(obj), ...)`` instead.
``out`` is buffered for memory overlaps in ``np.take``, ``np.choose``, ``np.put``
--------------------------------------------------------------------------------
If the out argument to these functions is provided and has memory overlap with
the other arguments, it is now buffered to avoid order-dependent behavior.
Unpickling while loading requires explicit opt-in
-------------------------------------------------
The functions ``np.load``, and ``np.lib.format.read_array`` take an
``allow_pickle`` keyword which now defaults to ``False`` in response to
`CVE-2019-6446 <https://nvd.nist.gov/vuln/detail/CVE-2019-6446>`_.
Potential changes to the random stream in old random module
-----------------------------------------------------------
Due to bugs in the application of log to random floating point numbers,
the stream may change when sampling from ``np.random.beta``, ``np.random.binomial``,
``np.random.laplace``, ``np.random.logistic``, ``np.random.logseries`` or
``np.random.multinomial`` if a 0 is generated in the underlying MT19937 random stream.
There is a 1 in :math:`10^{53}` chance of this occurring, and so the probability that
the stream changes for any given seed is extremely small. If a 0 is encountered in the
underlying generator, then the incorrect value produced (either ``np.inf``
or ``np.nan``) is now dropped.
``i0`` now always returns a result with the same shape as the input
-------------------------------------------------------------------
Previously, the output was squeezed, such that, e.g., input with just a single
element would lead to an array scalar being returned, and inputs with shapes
such as ``(10, 1)`` would yield results that would not broadcast against the
input.
Note that we generally recommend the SciPy implementation over the numpy one:
it is a proper ufunc written in C, and more than an order of magnitude faster.
``np.can_cast`` no longer assumes all unsafe casting is allowed
---------------------------------------------------------------
Previously, ``can_cast`` returned `True` for almost all inputs for
``casting='unsafe'``, even for cases where casting was not possible, such as
from a structured dtype to a regular one. This has been fixed, making it
more consistent with actual casting using, e.g., the ``.astype`` method.
``arr.writeable`` can be switched to true slightly more often
-------------------------------------------------------------
In rare cases, it was not possible to switch an array from not writeable
to writeable, although a base array is writeable. This can happen if an
intermediate ``arr.base`` object is writeable. Previously, only the deepest
base object was considered for this decision. However, in rare cases this
object does not have the necessary information. In that case switching to
writeable was never allowed. This has now been fixed.
C API changes
=============
dimension or stride input arguments are now passed by ``npy_intp const*``
-------------------------------------------------------------------------
Previously these function arguments were declared as the more strict
``npy_intp*``, which prevented the caller passing constant data.
This change is backwards compatible, but now allows code like::
npy_intp const fixed_dims[] = {1, 2, 3};
// no longer complains that the const-qualifier is discarded
npy_intp size = PyArray_MultiplyList(fixed_dims, 3);
New Features
============
New extensible random module with selectable random number generators
---------------------------------------------------------------------
A new extensible random module along with four selectable random number
generators and improved seeding designed for use in parallel processes has been
added. The currently available bit generators are MT19937, PCG64, Philox, and
SFC64. PCG64 is the new default while MT19937 is retained for backwards
compatibility. Note that the legacy random module is unchanged and is now
frozen; your current results will not change. Extensive documentation for the
new module is available online at
`NumPy devdocs <http://www.numpy.org/devdocs/reference/random/index.html>`_.
libFLAME
--------
Support for building NumPy with the libFLAME linear algebra package as the
LAPACK implementation, see
`libFLAME <https://www.cs.utexas.edu/~flame/web/libFLAME.html>`_ for details.
User-defined BLAS detection order
---------------------------------
``numpy.distutils`` now uses an environment variable, comma-separated and case
insensitive, to determine the detection order for BLAS libraries.
By default ``NPY_BLAS_ORDER=mkl,blis,openblas,atlas,accelerate,blas``.
However, to force the use of OpenBLAS simply do::
NPY_BLAS_ORDER=openblas python setup.py build
This may be helpful for users who have an MKL installation but wish to try
out different implementations.
User-defined LAPACK detection order
-----------------------------------
``numpy.distutils`` now uses an environment variable, comma-separated and case
insensitive, to determine the detection order for LAPACK libraries.
By default ``NPY_LAPACK_ORDER=mkl,openblas,flame,atlas,accelerate,lapack``.
However, to force the use of OpenBLAS simply do::
NPY_LAPACK_ORDER=openblas python setup.py build
This may be helpful for users who have an MKL installation but wish to try
out different implementations.
``np.ufunc.reduce`` and related functions now accept a ``where`` mask
---------------------------------------------------------------------
``np.ufunc.reduce``, ``np.sum``, ``np.prod``, ``np.min``, ``np.max`` all
now accept a ``where`` keyword argument, which can be used to tell which
elements to include in the reduction. For reductions that do not have an
identity, it is necessary to also pass in an initial value (e.g.,
``initial=np.inf`` for ``np.min``). For instance, the equivalent of
``nansum`` would be ``np.sum(a, where=~np.isnan(a))``.
Timsort and radix sort have replaced mergesort for stable sorting
-----------------------------------------------------------------
Both radix sort and timsort have been implemented and are now used in place of
mergesort. Due to the need to maintain backward compatibility, the sorting
``kind`` options ``"stable"`` and ``"mergesort"`` have been made aliases of
each other with the actual sort implementation depending on the array type.
Radix sort is used for small integer types of 16 bits or less and timsort for
the remaining types. Timsort features improved performance on already or
nearly sorted data, performs like mergesort on random data, and requires
O(n/2) working space. Details of the timsort algorithm can be found
at
`CPython listsort.txt <https://github.com/python/cpython/blob/3.7/Objects/listsort.txt>`_.
``np.unpackbits`` now accepts a ``count`` parameter
---------------------------------------------------
``count`` allows subsetting the number of bits that will be unpacked up-front,
rather than reshaping and subsetting later, making the ``packbits`` operation
invertible, and the unpacking less wasteful. Counts larger than the number of
available bits add zero padding. Negative counts trim bits off the end instead
of counting from the beginning. None counts implement the existing behavior of
unpacking everything.
``np.linalg.svd`` and ``np.linalg.pinv`` can be faster on hermitian inputs
--------------------------------------------------------------------------
These functions now accept a ``hermitian`` argument, matching the one added
to ``np.linalg.matrix_rank`` in 1.14.0.
divmod operation is now supported for two ``timedelta64`` operands
------------------------------------------------------------------
The divmod operator now handles two ``np.timedelta64`` operands, with
type signature mm->qm.
``np.fromfile`` now takes an ``offset`` argument
------------------------------------------------
This function now takes an ``offset`` keyword argument for binary files,
which specifies the offset (in bytes) from the file's current position.
Defaults to 0.
New mode "empty" for ``np.pad``
-------------------------------
This mode pads an array to a desired shape without initializing the new
entries.
``np.empty_like`` and related functions now accept a ``shape`` argument
-----------------------------------------------------------------------
``np.empty_like``, ``np.full_like``, ``np.ones_like`` and ``np.zeros_like`` now
accept a ``shape`` keyword argument, which can be used to create a new array
as the prototype, overriding its shape as well. This is particularly useful
when combined with the ``__array_function__`` protocol, allowing the creation
of new arbitrary-shape arrays from NumPy-like libraries when such an array
is used as the prototype.
Floating point scalars implement ``as_integer_ratio`` to match the builtin float
--------------------------------------------------------------------------------
This returns a (numerator, denominator) pair, which can be used to construct a
`fractions.Fraction`.
Structured ``dtype`` objects can be indexed with multiple fields names
----------------------------------------------------------------------
``arr.dtype[['a', 'b']]`` now returns a dtype that is equivalent to
``arr[['a', 'b']].dtype``, for consistency with
``arr.dtype['a'] == arr['a'].dtype``.
Like the dtype of structured arrays indexed with a list of fields, this dtype
has the same ``itemsize`` as the original, but only keeps a subset of the fields.
This means that ``arr[['a', 'b']]`` and ``arr.view(arr.dtype[['a', 'b']])`` are
equivalent.
``.npy`` files support unicode field names
------------------------------------------
A new format version of 3.0 has been introduced, which enables structured types
with non-latin1 field names. This is used automatically when needed.
`numpy.packbits` and `numpy.unpackbits` accept an ``order`` keyword
-------------------------------------------------------------------
The ``order`` keyword defaults to ``'big'`` and orders the **bits**
accordingly. For ``'big'``, 3 becomes ``[0, 0, 0, 0, 0, 0, 1, 1]``; for
``'little'``, it becomes ``[1, 1, 0, 0, 0, 0, 0, 0]``.
Improvements
============
Array comparison assertions include maximum differences
-------------------------------------------------------
Error messages from array comparison tests such as
`np.testing.assert_allclose` now include "max absolute difference" and
"max relative difference," in addition to the previous "mismatch" percentage.
This information makes it easier to update absolute and relative error
tolerances.
Replacement of the fftpack based FFT module by the pocketfft library
--------------------------------------------------------------------
Both implementations have the same ancestor (Fortran77 FFTPACK by Paul N.
Swarztrauber), but pocketfft contains additional modifications which improve
both accuracy and performance in some circumstances. For FFT lengths containing
large prime factors, pocketfft uses Bluestein's algorithm, which maintains
``O(N log N)`` run time complexity instead of deteriorating towards ``O(N*N)``
for prime lengths. Also, accuracy for real valued FFTs with near prime lengths
has improved and is on par with complex valued FFTs.
Further improvements to ``ctypes`` support in `numpy.ctypeslib`
---------------------------------------------------------------
A new `numpy.ctypeslib.as_ctypes_type` function has been added, which can be
used to convert a ``dtype`` into a best-guess ``ctypes`` type. Thanks to this
new function, `numpy.ctypeslib.as_ctypes` now supports a much wider range of
array types, including structures, booleans, and integers of non-native
endianness.
`numpy.errstate` is now also a function decorator
-------------------------------------------------
Currently, if you have a function like::
def foo():
pass
and you want to wrap the whole thing in ``errstate``, you have to rewrite it
like so::
def foo():
with np.errstate(...):
pass
but with this change, you can do::
@np.errstate(...)
def foo():
pass
thereby saving a level of indentation.
`numpy.exp` and `numpy.log` speed up for float32 implementation
---------------------------------------------------------------
The float32 implementations of numpy.exp and numpy.log now benefit from the
AVX2/AVX512 instruction sets, which are detected at runtime. numpy.exp has a
max ulp error of 2.52 and numpy.log has a max ulp error of 3.83.
Improve performance of `numpy.pad`
----------------------------------
The performance of the function has been improved for most cases by filling in
a preallocated array with the desired padded shape instead of using
concatenation.
`numpy.interp` handles infinities more robustly
-----------------------------------------------
In some cases where ``np.interp`` would previously return ``np.nan``, it now
returns an appropriate infinity.
Pathlib support for ``np.fromfile``, ``ndarray.tofile`` and ``ndarray.dump``
----------------------------------------------------------------------------
``np.fromfile``, ``np.ndarray.tofile`` and ``np.ndarray.dump`` now support
the `pathlib.Path` type for the ``file``/``fid`` parameter.
Specialized ``np.isnan``, ``np.isinf``, and ``np.isfinite`` ufuncs for bool and int types
-----------------------------------------------------------------------------------------
The boolean and integer types are incapable of storing ``np.nan`` and
``np.inf`` values, which allows us to provide specialized ufuncs that are up to
250x faster than the current approach.
``np.isfinite`` supports ``datetime64`` and ``timedelta64`` types
-----------------------------------------------------------------
Previously, `np.isfinite` raised a ``TypeError`` when used on these two
types.
New keywords added to ``np.nan_to_num``
---------------------------------------
``np.nan_to_num`` now accepts keywords ``nan``, ``posinf`` and ``neginf``
allowing the user to define the value to replace the ``nan``, positive and
negative ``np.inf`` values respectively.
MemoryErrors caused by allocating overly large arrays are more descriptive
-------------------------------------------------------------------------
Often the cause of a MemoryError is incorrect broadcasting, which results in a
very large and incorrect shape. The message of the error now includes this
shape to help diagnose the cause of failure.
`floor`, `ceil`, and `trunc` now respect builtin magic methods
--------------------------------------------------------------
These ufuncs now call the ``__floor__``, ``__ceil__``, and ``__trunc__``
methods when called on object arrays, making them compatible with
`decimal.Decimal` and `fractions.Fraction` objects.
``quantile`` now works on ``fractions.Fraction`` and ``decimal.Decimal`` objects
-------------------------------------------------------------------------------
In general, this handles object arrays more gracefully, and avoids floating-
point operations if exact arithmetic types are used.
Support of object arrays in ``np.matmul``
-----------------------------------------
It is now possible to use ``np.matmul`` (or the ``@`` operator) with object arrays.
For instance, it is now possible to do::
from fractions import Fraction
a = np.array([[Fraction(1, 2), Fraction(1, 3)], [Fraction(1, 3), Fraction(1, 2)]])
b = a @ a
Changes
=======
``median`` and ``percentile`` family of functions no longer warn about ``nan``
------------------------------------------------------------------------------
`numpy.median`, `numpy.percentile`, and `numpy.quantile` used to emit a
``RuntimeWarning`` when encountering a `numpy.nan`. Since they return the
``nan`` value, the warning is redundant and has been removed.
``timedelta64 % 0`` behavior adjusted to return ``NaT``
-------------------------------------------------------
The modulus operation with two ``np.timedelta64`` operands now returns
``NaT`` in the case of division by zero, rather than returning zero.
NumPy functions now always support overrides with ``__array_function__``
------------------------------------------------------------------------
NumPy now always checks the ``__array_function__`` method to implement overrides
of NumPy functions on non-NumPy arrays, as described in `NEP 18`_. The feature
was available for testing with NumPy 1.16 if appropriate environment variables
were set, but is now always enabled.
.. _`NEP 18` : http://www.numpy.org/neps/nep-0018-array-function-protocol.html
`numpy.lib.recfunctions.structured_to_unstructured` does not squeeze single-field views
---------------------------------------------------------------------------------------
Previously ``structured_to_unstructured(arr[['a']])`` would produce a squeezed
result inconsistent with ``structured_to_unstructured(arr[['a', 'b']])``. This
was accidental. The old behavior can be retained with
``structured_to_unstructured(arr[['a']]).squeeze(axis=-1)`` or far more simply,
``arr['a']``.
``clip`` now uses a ufunc under the hood
----------------------------------------
This means that registering clip functions for custom dtypes in C via
``descr->f->fastclip`` is deprecated - they should use the ufunc registration
mechanism instead, attaching to the ``np.core.umath.clip`` ufunc.
It also means that ``clip`` accepts ``where`` and ``casting`` arguments,
and can be overridden with ``__array_ufunc__``.
A consequence of this change is that some behaviors of the old ``clip`` have
been deprecated:
* Passing ``nan`` to mean "do not clip" as one or both bounds. This didn't work
in all cases anyway, and can be better handled by passing infinities of the
appropriate sign.
* Using "unsafe" casting by default when an ``out`` argument is passed. Using
``casting="unsafe"`` explicitly will silence this warning.
Additionally, there are some corner cases with behavior changes:
* Passing ``max < min`` has changed to be more consistent across dtypes, but
should not be relied upon.
* Scalar ``min`` and ``max`` take part in promotion rules like they do in all
other ufuncs.
``__array_interface__`` offset now works as documented
------------------------------------------------------
The interface may use an ``offset`` value that was previously mistakenly ignored.
Pickle protocol in ``np.savez`` set to 3 for ``force zip64`` flag
-----------------------------------------------------------------
``np.savez`` was not using the ``force_zip64`` flag, which limited the size of
the archive to 2GB. But using the flag requires us to use pickle protocol 3 to
write ``object`` arrays. The protocol used was bumped to 3, meaning the archive
will be unreadable by Python 2.
Structured arrays indexed with non-existent fields raise ``KeyError`` not ``ValueError``
----------------------------------------------------------------------------------------
``arr['bad_field']`` on a structured type raises ``KeyError``, for consistency
with ``dict['bad_field']``.
=========================
```
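The new random module described above ships without an inline example in these notes; a minimal sketch of the 1.17 API, together with the new ``where`` reduction mask (values and seed are illustrative):
```
import numpy as np

# New-style generator (PCG64 by default); the legacy np.random.* API
# is frozen and unchanged.
rng = np.random.default_rng(42)
sample = rng.standard_normal(5)

# Reductions now accept a `where` mask (pass `initial` for reductions
# without an identity, e.g. np.min).
data = np.array([1.0, np.nan, 3.0])
total = np.sum(data, where=~np.isnan(data))  # behaves like nansum here
print(sample, total)
```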
Links
- PyPI: https://pypi.org/project/numpy
- Changelog: https://pyup.io/changelogs/numpy/
- Homepage: https://www.numpy.org
Changelog
### 1.4.0
```
This release changes the `guess` option behavior because of a tabula-java fix. The `guess` option is chosen if you don't set the `area` option. The `stream` and `lattice` options can now be used together with the `guess` option.
Also, Python 2.7 and 3.4 support has been dropped. See 167 for more details.
See the Google Colab notebook for an example.
- Bump tabula-java 1.0.3 167
- Drop Python 2.7, 3.4 167
- Add User-agent handling 151
- Introduce JAR path environment variable 136
- Enforce silent option 132
```
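A small usage sketch of the changed `guess` behavior, assuming tabula-py 1.4.0 and a local `example.pdf` (both the file name and the option values are illustrative):
```
import tabula

# Since 1.4.0, `lattice` (or `stream`) can be combined with `guess`;
# `guess` takes effect when no explicit `area` is given.
df = tabula.read_pdf("example.pdf", lattice=True, guess=True, pages="all")
```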
Links
- PyPI: https://pypi.org/project/tabula-py
- Changelog: https://pyup.io/changelogs/tabula-py/
- Repo: https://github.com/chezou/tabula-py
Changelog
### 3.7.8
```
-------------------
You can view the `3.7.8 milestone`_ on GitLab for more details.
Bugs Fixed
~~~~~~~~~~
- Fix handling of ``Application.parse_preliminary_options_and_args`` when
argv is an empty list (See also `GitLab!310`_, `GitLab518`_)
- Fix crash when a file parses but fails to tokenize (See also `GitLab!314`_,
`GitLab532`_)
- Log the full traceback on plugin exceptions (See also `GitLab!317`_)
- Fix ``# noqa: ...`` comments with multi-letter codes (See also `GitLab!326`_,
`GitLab549`_)
.. all links
.. _3.7.8 milestone:
https://gitlab.com/pycqa/flake8/milestones/31
.. issue links
.. _GitLab518:
https://gitlab.com/pycqa/flake8/issues/518
.. _GitLab532:
https://gitlab.com/pycqa/flake8/issues/532
.. _GitLab549:
https://gitlab.com/pycqa/flake8/issues/549
.. merge request links
.. _GitLab!310:
https://gitlab.com/pycqa/flake8/merge_requests/310
.. _GitLab!314:
https://gitlab.com/pycqa/flake8/merge_requests/314
.. _GitLab!317:
https://gitlab.com/pycqa/flake8/merge_requests/317
.. _GitLab!326:
https://gitlab.com/pycqa/flake8/merge_requests/326
```
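For the ``# noqa`` fix above: with 3.7.8, per-line suppressions that use multi-letter codes work again. A tiny sketch, where ``XYZ123`` stands in for a hypothetical plugin's code:
```
# Suppress the ambiguous-name warning (E741) plus a hypothetical
# multi-letter plugin code on the same line.
l = 1  # noqa: E741, XYZ123
print(l)
```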
Links
- PyPI: https://pypi.org/project/flake8
- Changelog: https://pyup.io/changelogs/flake8/
- Repo: https://gitlab.com/pycqa/flake8
Changelog
### 2.1.12
```
==============================
* Multi-value support and interface improvements for Git configuration. Thanks to A. Jesse Jiryu Davis.
see the following for (most) details:
https://github.com/gitpython-developers/gitpython/milestone/27?closed=1
or have a look at the difference between tags v2.1.11 and v2.1.12:
https://github.com/gitpython-developers/GitPython/compare/2.1.11...2.1.12
```
Links
- PyPI: https://pypi.org/project/gitpython
- Changelog: https://pyup.io/changelogs/gitpython/
- Repo: https://github.com/gitpython-developers/GitPython
- Docs: https://pythonhosted.org/GitPython/
Changelog
### 0.14.1
```
+++++++++++++++++++
- CallSignature.index should now be working a lot better
- A couple of smaller bugfixes
```
### 0.14.0
```
+++++++++++++++++++
- Added ``goto_*(prefer_stubs=True)`` as well as ``goto_*(only_stubs=True)``
- Stubs are used now for type inference
- Typeshed is used for better type inference
- Reworked Definition.full_name, should have more correct return values
```
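A quick sketch of the stub-related additions noted above, assuming jedi 0.14 (the source snippet and cursor position are illustrative):
```
import jedi

source = "import json\njson.load"
# 0.14 resolves through typeshed stubs; prefer_stubs/only_stubs are the
# new goto_* keyword arguments mentioned above.
script = jedi.Script(source, 2, len("json.load"))
print(script.goto_definitions(prefer_stubs=True))
```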
### 0.13.4
```
====================
Changes
-------
* fix duplication in function parameters completion (267)
```
Links
- PyPI: https://pypi.org/project/jedi
- Changelog: https://pyup.io/changelogs/jedi/
- Repo: https://github.com/davidhalter/jedi
Changelog
### 0.5.1
```
++++++++++++++++++
- Fix: Some unicode identifiers were not correctly tokenized
- Fix: Line continuations in f-strings are now working
```
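A minimal sanity check of the f-string handling mentioned above (the parsed snippet is illustrative):
```
import parso

# parso 0.5.x tokenizes f-strings (including line continuations) correctly.
tree = parso.parse('value = f"{1 + 1}"\n')
print(tree.children[0].get_code())
```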
### 0.5.0
```
++++++++++++++++++
- **Breaking Change** comp_for is now called sync_comp_for for all Python
versions to be compatible with the Python 3.8 Grammar
- Added .pyi stubs for a lot of the parso API
- Small FileIO changes
```
Links
- PyPI: https://pypi.org/project/parso
- Changelog: https://pyup.io/changelogs/parso/
- Repo: https://github.com/davidhalter/parso
Changelog
### 2.4.2
```
- API change adding support for `expr[...]` - the original
code in 2.4.1 incorrectly implemented this as OneOrMore.
Code using this feature under this release should explicitly
use `expr[0, ...]` for ZeroOrMore and `expr[1, ...]` for
OneOrMore. In 2.4.2 you will be able to write `expr[...]`
equivalent to `ZeroOrMore(expr)`.
- Bug if composing And, Or, MatchFirst, or Each expressions
using an expression. This only affects code which uses
explicit expression construction using the And, Or, etc.
classes instead of using overloaded operators '+', '^', and
so on. If constructing an And using a single expression,
you may get an error that "cannot multiply ParserElement by
0 or (0, 0)" or a Python `IndexError`. Change code like
cmd = Or(Word(alphas))
to
cmd = Or([Word(alphas)])
(Note that this is not the recommended style for constructing
Or expressions.)
- Some newly-added `__diag__` switches are enabled by default,
which may give rise to noisy user warnings for existing parsers.
You can disable them using:
import pyparsing as pp
pp.__diag__.warn_multiple_tokens_in_named_alternation = False
pp.__diag__.warn_ungrouped_named_tokens_in_collection = False
pp.__diag__.warn_name_set_on_empty_Forward = False
pp.__diag__.warn_on_multiple_string_args_to_oneof = False
pp.__diag__.enable_debug_on_named_expressions = False
In 2.4.2 these will all be set to False by default.
```
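Given the `expr[...]` semantics settled in this release, explicit bounds are the unambiguous spelling; a short sketch (variable names are illustrative):
```
import pyparsing as pp

word = pp.Word(pp.alphas)
# Explicit bounds avoid the OneOrMore/ZeroOrMore ambiguity described above.
zero_or_more = word[0, ...]  # same as pp.ZeroOrMore(word)
one_or_more = word[1, ...]   # same as pp.OneOrMore(word)
print(one_or_more.parseString("abc def"))  # -> ['abc', 'def']
```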
### 2.4.2a1
```
----------------------------
It turns out I got the meaning of `[...]` absolutely backwards,
so I've deleted 2.4.1 and am repushing this release as 2.4.2a1
for people to give it a try before I can call it ready to go.
The `expr[...]` notation was pushed out to be synonymous with
`OneOrMore(expr)`, but this is really counter to most Python
notations (and even other internal pyparsing notations as well).
It should have been defined to be equivalent to ZeroOrMore(expr).
- Changed [...] to emit ZeroOrMore instead of OneOrMore.
- Removed code that treats ParserElements like iterables.
- Change all __diag__ switches to False.
```
### 2.4.1.1
```
-------------------------------
This is a re-release of version 2.4.1 to restore the release history
in PyPI, since the 2.4.1 release was deleted.
There are 3 known issues in this release, which are fixed in 2.4.2.
### 2.4.1
```
--------------------------
- NOTE: Deprecated functions and features that will be dropped
in pyparsing 2.5.0 (planned next release):
. support for Python 2 - ongoing users running with
Python 2 can continue to use pyparsing 2.4.1
. ParseResults.asXML() - if used for debugging, switch
to using ParseResults.dump(); if used for data transfer,
use ParseResults.asDict() to convert to a nested Python
dict, which can then be converted to XML or JSON or
other transfer format
. operatorPrecedence synonym for infixNotation -
convert to calling infixNotation
. commaSeparatedList - convert to using
pyparsing_common.comma_separated_list
. upcaseTokens and downcaseTokens - convert to using
pyparsing_common.upcaseTokens and downcaseTokens
. __compat__.collect_all_And_tokens will not be settable to
False to revert to pre-2.3.1 results name behavior -
review use of names for MatchFirst and Or expressions
containing And expressions, as they will return the
complete list of parsed tokens, not just the first one.
Use __diag__.warn_multiple_tokens_in_named_alternation
(described below) to help identify those expressions
in your parsers that will have changed as a result.
- A new shorthand notation has been added for repetition
expressions: expr[min, max], with '...' valid as a min
or max value:
- expr[...] is equivalent to OneOrMore(expr)
- expr[0, ...] is equivalent to ZeroOrMore(expr)
- expr[1, ...] is equivalent to OneOrMore(expr)
- expr[n, ...] or expr[n,] is equivalent
to expr*n + ZeroOrMore(expr)
(read as "n or more instances of expr")
- expr[..., n] is equivalent to expr*(0, n)
- expr[m, n] is equivalent to expr*(m, n)
Note that expr[..., n] and expr[m, n] do not raise an exception
if more than n exprs exist in the input stream. If this
behavior is desired, then write expr[..., n] + ~expr.
- '...' can also be used as shorthand for SkipTo when used
in adding parse expressions to compose an And expression.
Literal('start') + ... + Literal('end')
And(['start', ..., 'end'])
are both equivalent to:
Literal('start') + SkipTo('end')("_skipped*") + Literal('end')
The '...' form has the added benefit of not requiring repeating
the skip target expression. Note that the skipped text is
returned with '_skipped' as a results name, and that the contents of
`_skipped` will contain a list of text from all `...`s in the expression.
- '...' can also be used as a "skip forward in case of error" expression:
expr = "start" + (Word(nums).setName("int") | ...) + "end"
expr.parseString("start 456 end")
['start', '456', 'end']
expr.parseString("start 456 foo 789 end")
['start', '456', 'foo 789 ', 'end']
- _skipped: ['foo 789 ']
expr.parseString("start foo end")
['start', 'foo ', 'end']
- _skipped: ['foo ']
expr.parseString("start end")
['start', '', 'end']
- _skipped: ['missing <int>']
Note that in all the error cases, the '_skipped' results name is
present, showing a list of the extra or missing items.
This form is only valid when used with the '|' operator.
- Improved exception messages to show what was actually found, not
just what was expected.
word = pp.Word(pp.alphas)
pp.OneOrMore(word).parseString("aaa bbb 123", parseAll=True)
Former exception message:
pyparsing.ParseException: Expected end of text (at char 8), (line:1, col:9)
New exception message:
pyparsing.ParseException: Expected end of text, found '1' (at char 8), (line:1, col:9)
- Added diagnostic switches to help detect and warn about common
parser construction mistakes, or enable additional parse
debugging. Switches are attached to the pyparsing.__diag__
namespace object:
- warn_multiple_tokens_in_named_alternation - flag to enable warnings when a results
name is defined on a MatchFirst or Or expression with one or more And subexpressions
(default=True)
- warn_ungrouped_named_tokens_in_collection - flag to enable warnings when a results
name is defined on a containing expression with ungrouped subexpressions that also
have results names (default=True)
- warn_name_set_on_empty_Forward - flag to enable warnings when a Forward is defined
with a results name, but has no contents defined (default=False)
- warn_on_multiple_string_args_to_oneof - flag to enable warnings when oneOf is
incorrectly called with multiple str arguments (default=True)
- enable_debug_on_named_expressions - flag to auto-enable debug on all subsequent
calls to ParserElement.setName() (default=False)
warn_multiple_tokens_in_named_alternation is intended to help
those who currently have set __compat__.collect_all_And_tokens to
False as a workaround for using the pre-2.3.1 code with named
MatchFirst or Or expressions containing an And expression.
- Added ParseResults.from_dict classmethod, to simplify creation
of a ParseResults with results names using a dict, which may be nested.
This makes it easy to add a sub-level of named items to the parsed
tokens in a parse action.
- Added asKeyword argument (default=False) to oneOf, to force
keyword-style matching on the generated expressions.
- ParserElement.runTests now accepts an optional 'file' argument to
redirect test output to a file-like object (such as a StringIO,
or opened file). Default is to write to sys.stdout.
- conditionAsParseAction is a helper method for constructing a
parse action method from a predicate function that simply
returns a boolean result. Useful for those places where a
predicate cannot be added using addCondition, but must be
converted to a parse action (such as in infixNotation). May be
used as a decorator if default message and exception types
can be used. See ParserElement.addCondition for more details
about the expected signature and behavior for predicate condition
methods.
- While investigating issue 93, I found that Or and
addCondition could interact to select an alternative that
is not the longest match. This is because Or first checks
all alternatives for matches without running attached
parse actions or conditions, orders by longest match, and
then rechecks for matches with conditions and parse actions.
Some expressions, when checking with conditions, may end
up matching on a shorter token list than originally matched,
but would be selected because of its original priority.
This matching code has been expanded to do more extensive
searching for matches when a second-pass check matches a
smaller list than in the first pass.
- Fixed issue 87, a regression in indented block.
Reported by Renz Bagaporo, who submitted a very nice repro
example, which makes the bug-fixing process a lot easier,
thanks!
- Fixed MemoryError issue 85 and 91 with str generation for
Forwards. Thanks decalage2 and Harmon758 for your patience.
- Modified setParseAction to accept None as an argument,
indicating that all previously-defined parse actions for the
expression should be cleared.
- Modified pyparsing_common.real and sci_real to parse reals
without leading integer digits before the decimal point,
consistent with Python real number formats. Original PR 98
submitted by ansobolev.
- Modified runTests to call postParse function before dumping out
the parsed results - allows for postParse to add further results,
such as indications of additional validation success/failure.
- Updated statemachine example: refactored state transitions to use
overridden classmethods; added <statename>Mixin class to simplify
definition of application classes that "own" the state object and
delegate to it to model state-specific properties and behavior.
- Added example nested_markup.py, showing a simple wiki markup with
nested markup directives, and illustrating the use of '...' for
skipping over input to match the next expression. (This example
uses syntax that is not valid under Python 2.)
- Rewrote delta_time.py example (renamed from deltaTime.py) to
fix some omitted formats and upgrade to latest pyparsing idioms,
beginning with writing an actual BNF.
- With the help and encouragement from several contributors, including
Matěj Cepl and Cengiz Kaygusuz, I've started cleaning up the internal
coding styles in core pyparsing, bringing it up to modern coding
practices from pyparsing's early development days dating back to
2003. Whitespace has been largely standardized along PEP8 guidelines,
removing extra spaces around parentheses, and adding them around
arithmetic operators and after colons and commas. I was going to hold
off on doing this work until after 2.4.1, but after cleaning up a
few trial classes, the difference was so significant that I continued
on to the rest of the core code base. This should facilitate future
work and submitted PRs, allowing them to focus on substantive code
changes, and not get sidetracked by whitespace issues.
```
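One of the 2.4.1 features above, the `...` skip shorthand, in a compact sketch (the input string is illustrative):
```
import pyparsing as pp

# '...' composes to a SkipTo of the following expression; skipped text
# is collected under the '_skipped' results name.
body = pp.Literal("start") + ... + pp.Literal("end")
result = body.parseString("start anything here end")
print(result.dump())
```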
Links
- PyPI: https://pypi.org/project/pyparsing
- Changelog: https://pyup.io/changelogs/pyparsing/
- Repo: https://github.com/pyparsing/pyparsing/
- Docs: https://pythonhosted.org/pyparsing/
Changelog
### 16.7.1
```
--------------------
Features
^^^^^^^^
- pip bumped to 19.2.1 (`1392 <https://github.com/pypa/virtualenv/issues/1392>`_)
```
### 16.7.0
```
--------------------
Features
^^^^^^^^
- ``activate.ps1`` syntax and style updated to follow ``PSStyleAnalyzer`` rules (`1371 <https://github.com/pypa/virtualenv/issues/1371>`_)
- Allow creating virtual environments for ``3.xy``. (`1385 <https://github.com/pypa/virtualenv/issues/1385>`_)
- Report error when running activate scripts directly, instead of sourcing. By reporting an error instead of running silently, the user gets immediate feedback that the script was not used correctly. Only Bash and PowerShell are supported for now. (`1388 <https://github.com/pypa/virtualenv/issues/1388>`_)
- Add pip 19.2 (19.1.1 is kept to still support Python 3.4, dropped by the latest pip) (`1389 <https://github.com/pypa/virtualenv/issues/1389>`_)
```
### 16.6.2
```
--------------------
Bugfixes
^^^^^^^^
- Extend the LICENSE search paths list by ``lib64/pythonX.Y`` to support Linux
vendors who install their Python to ``/usr/lib64/pythonX.Y`` (Gentoo, Fedora,
openSUSE, RHEL and others) - by ``hroncok`` (`1382 <https://github.com/pypa/virtualenv/issues/1382>`_)
```
Links
- PyPI: https://pypi.org/project/virtualenv
- Changelog: https://pyup.io/changelogs/virtualenv/
- Homepage: https://virtualenv.pypa.io/
Update django from 2.2.2 to 2.2.4.
Changelog
### 2.2.4 ``` ========================== *August 1, 2019* Django 2.2.4 fixes security issues and several bugs in 2.2.3. CVE-2019-14232: Denial-of-service possibility in ``django.utils.text.Truncator`` ================================================================================ If ``django.utils.text.Truncator``'s ``chars()`` and ``words()`` methods were passed the ``html=True`` argument, they were extremely slow to evaluate certain inputs due to a catastrophic backtracking vulnerability in a regular expression. The ``chars()`` and ``words()`` methods are used to implement the :tfilter:`truncatechars_html` and :tfilter:`truncatewords_html` template filters, which were thus vulnerable. The regular expressions used by ``Truncator`` have been simplified in order to avoid potential backtracking issues. As a consequence, trailing punctuation may now at times be included in the truncated output. CVE-2019-14233: Denial-of-service possibility in ``strip_tags()`` ================================================================= Due to the behavior of the underlying ``HTMLParser``, :func:`django.utils.html.strip_tags` would be extremely slow to evaluate certain inputs containing large sequences of nested incomplete HTML entities. The ``strip_tags()`` method is used to implement the corresponding :tfilter:`striptags` template filter, which was thus also vulnerable. ``strip_tags()`` now avoids recursive calls to ``HTMLParser`` when progress removing tags, but necessarily incomplete HTML entities, stops being made. Remember that absolutely NO guarantee is provided about the results of ``strip_tags()`` being HTML safe. So NEVER mark safe the result of a ``strip_tags()`` call without escaping it first, for example with :func:`django.utils.html.escape`. CVE-2019-14234: SQL injection possibility in key and index lookups for ``JSONField``/``HStoreField`` ==================================================================================================== :lookup:`Key and index lookups <jsonfield.key>` for :class:`~django.contrib.postgres.fields.JSONField` and :lookup:`key lookups <hstorefield.key>` for :class:`~django.contrib.postgres.fields.HStoreField` were subject to SQL injection, using a suitably crafted dictionary, with dictionary expansion, as the ``**kwargs`` passed to ``QuerySet.filter()``. CVE-2019-14235: Potential memory exhaustion in ``django.utils.encoding.uri_to_iri()`` ===================================================================================== If passed certain inputs, :func:`django.utils.encoding.uri_to_iri` could lead to significant memory usage due to excessive recursion when re-percent-encoding invalid UTF-8 octet sequences. ``uri_to_iri()`` now avoids recursion when re-percent-encoding invalid UTF-8 octet sequences. Bugfixes ======== * Fixed a regression in Django 2.2 when ordering a ``QuerySet.union()``, ``intersection()``, or ``difference()`` by a field type present more than once results in the wrong ordering being used (:ticket:`30628`). * Fixed a migration crash on PostgreSQL when adding a check constraint with a ``contains`` lookup on :class:`~django.contrib.postgres.fields.DateRangeField` or :class:`~django.contrib.postgres.fields.DateTimeRangeField`, if the right hand side of an expression is the same type (:ticket:`30621`). * Fixed a regression in Django 2.2 where auto-reloader crashes if a file path contains nulls characters (``'\x00'``) (:ticket:`30506`). 
* Fixed a regression in Django 2.2 where auto-reloader crashes if a translation directory cannot be resolved (:ticket:`30647`). ========================== ``` ### 2.2.3 ``` ========================== *July 1, 2019* Django 2.2.3 fixes a security issue and several bugs in 2.2.2. Also, the latest string translations from Transifex are incorporated. CVE-2019-12781: Incorrect HTTP detection with reverse-proxy connecting via HTTPS -------------------------------------------------------------------------------- When deployed behind a reverse-proxy connecting to Django via HTTPS, :attr:`django.http.HttpRequest.scheme` would incorrectly detect client requests made via HTTP as using HTTPS. This entails incorrect results for :meth:`~django.http.HttpRequest.is_secure`, and :meth:`~django.http.HttpRequest.build_absolute_uri`, and that HTTP requests would not be redirected to HTTPS in accordance with :setting:`SECURE_SSL_REDIRECT`. ``HttpRequest.scheme`` now respects :setting:`SECURE_PROXY_SSL_HEADER`, if it is configured, and the appropriate header is set on the request, for both HTTP and HTTPS requests. If you deploy Django behind a reverse-proxy that forwards HTTP requests, and that connects to Django via HTTPS, be sure to verify that your application correctly handles code paths relying on ``scheme``, ``is_secure()``, ``build_absolute_uri()``, and ``SECURE_SSL_REDIRECT``. Bugfixes ======== * Fixed a regression in Django 2.2 where :class:`~django.db.models.Avg`, :class:`~django.db.models.StdDev`, and :class:`~django.db.models.Variance` crash with ``filter`` argument (:ticket:`30542`). * Fixed a regression in Django 2.2.2 where auto-reloader crashes with ``AttributeError``, e.g. when using ``ipdb`` (:ticket:`30588`). ========================== ```Links
- PyPI: https://pypi.org/project/django - Changelog: https://pyup.io/changelogs/django/ - Homepage: https://www.djangoproject.com/Update lxml from 4.3.4 to 4.4.0.
Changelog
### 4.4.0 ``` ================== Features added -------------- * ``Element.clear()`` accepts a new keyword argument ``keep_tail=True`` to clear everything but the tail text. This is helpful in some document-style use cases. * When creating attributes or namespaces from a dict in Python 3.6+, lxml now preserves the original insertion order of that dict, instead of always sorting the items by name. A similar change was made for ElementTree in CPython 3.8. See https://bugs.python.org/issue34160 * Integer elements in ``lxml.objectify`` implement the ``__index__()`` special method. * GH269: Read-only elements in XSLT were missing the ``nsmap`` property. Original patch by Jan Pazdziora. * ElementInclude can now restrict the maximum inclusion depth via a ``max_depth`` argument to prevent content explosion. It is limited to 6 by default. * The ``target`` object of the XMLParser can have ``start_ns()`` and ``end_ns()`` callback methods to listen to namespace declarations. * The ``TreeBuilder`` has new arguments ``comment_factory`` and ``pi_factory`` to pass factories for creating comments and processing instructions, as well as flag arguments ``insert_comments`` and ``insert_pis`` to discard them from the tree when set to false. * A `C14N 2.0 <https://www.w3.org/TR/xml-c14n2/>`_ implementation was added as ``etree.canonicalize()``, a corresponding ``C14NWriterTarget`` class, and a ``c14n2`` serialisation method. Bugs fixed ---------- * When writing to file paths that contain the URL escape character '%', the file path could wrongly be mangled by URL unescaping and thus write to a different file or directory. Code that writes to file paths that are provided by untrusted sources, but that must work with previous versions of lxml, should best either reject paths that contain '%' characters, or otherwise make sure that the path does not contain maliciously injected '%XX' URL hex escapes for paths like '../'. * Assigning to Element child slices with negative step could insert the slice at the wrong position, starting too far on the left. * Assigning to Element child slices with overly large step size could take very long, regardless of the length of the actual slice. * Assigning to Element child slices of the wrong size could sometimes fail to raise a ValueError (like a list assignment would) and instead assign outside of the original slice bounds or leave parts of it unreplaced. * The ``comment`` and ``pi`` events in ``iterwalk()`` were never triggered, and instead, comments and processing instructions in the tree were reported as ``start`` elements. Also, when walking an ElementTree (as opposed to its root element), comments and PIs outside of the root element are now reported. * LP1827833: The RelaxNG compact syntax support was broken with recent versions of ``rnc2rng``. * LP1758553: The HTML elements ``source`` and ``track`` were added to the list of empty tags in ``lxml.html.defs``. * Registering a prefix other than "xml" for the XML namespace is now rejected. * Failing to write XSLT output to a file could raise a misleading exception. It now raises ``IOError``. Other changes ------------- * Support for Python 3.4 was removed. * When using ``Element.find*()`` with prefix-namespace mappings, the empty string is now accepted to define a default namespace, in addition to the previously supported ``None`` prefix. Empty strings are more convenient since they keep all prefix keys in a namespace dict strings, which simplifies sorting etc. 
* The ``ElementTree.write_c14n()`` method has been deprecated in favour of the long preferred ``ElementTree.write(f, method="c14n")``. It will be removed in a future release.
```

### 4.3.5
```
==================

* Rebuilt with Cython 0.29.13 to support Python 3.8.
```
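A minimal sketch of two of the 4.4.0 additions described above, the ``keep_tail`` keyword and the C14N 2.0 serialisation (the XML input is made up for illustration):

```
from lxml import etree

root = etree.fromstring("<root><child a='1'>text</child>tail</root>")
child = root[0]
child.clear(keep_tail=True)  # drops attributes, text, and children, but keeps the "tail" text

# C14N 2.0 output as a string; a C14NWriterTarget class is also available.
print(etree.canonicalize("<doc xmlns='urn:example'><a/></doc>"))
```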
Links

- PyPI: https://pypi.org/project/lxml
- Changelog: https://pyup.io/changelogs/lxml/
- Homepage: http://lxml.de/

Update numpy from 1.16.4 to 1.17.0.
Changelog
### 1.17.0
```
==========================

This NumPy release contains a number of new features that should substantially improve its performance and usefulness; see Highlights below for a summary. The Python versions supported are 3.5-3.7; note that Python 2.7 has been dropped. Python 3.8b1 should work with the released source packages, but there are no future guarantees. Downstream developers should use Cython >= 0.29.10 for Python 3.8 support and OpenBLAS >= 3.7 (not currently out) to avoid problems on the Skylake architecture. The NumPy wheels on PyPI are built from the OpenBLAS development branch in order to avoid those problems.

Highlights
==========

* A new extensible random module along with four selectable random number generators and improved seeding designed for use in parallel processes has been added. The currently available bit generators are MT19937, PCG64, Philox, and SFC64. See below under New Features.
* NumPy's FFT implementation was changed from fftpack to pocketfft, resulting in faster, more accurate transforms and better handling of datasets of prime length. See below under Improvements.
* New radix sort and timsort sorting methods. It is currently not possible to choose which will be used, but they are hardwired to the datatype and used when either ``stable`` or ``mergesort`` is passed as the method. See below under Improvements.
* Overriding numpy functions is now possible by default, see ``__array_function__`` below.

New functions
=============

* `numpy.errstate` is now also a function decorator

Deprecations
============

``np.polynomial`` functions warn when passed ``float`` in place of ``int``
---------------------------------------------------------------------------

Previously functions in this module would accept ``float`` values provided they were integral (``1.0``, ``2.0``, etc). For consistency with the rest of numpy, doing so is now deprecated, and in future will raise a ``TypeError``. Similarly, passing a float like ``0.5`` in place of an integer will now raise a ``TypeError`` instead of the previous ``ValueError``.

Deprecate ``numpy.distutils.exec_command`` and ``numpy.distutils.temp_file_name``
----------------------------------------------------------------------------------

The internal use of these functions has been refactored and there are better alternatives. Replace ``exec_command`` with `subprocess.Popen` and ``temp_file_name`` with `tempfile.mkstemp`.

Writeable flag of C-API wrapped arrays
--------------------------------------

When an array is created from the C-API to wrap a pointer to data, the only indication we have of the read-write nature of the data is the ``writeable`` flag set during creation. It is dangerous to force the flag to writeable. In the future it will not be possible to switch the writeable flag to ``True`` from python. This deprecation should not affect many users since arrays created in such a manner are very rare in practice and only available through the NumPy C-API.

`numpy.nonzero` should no longer be called on 0d arrays
-------------------------------------------------------

The behavior of nonzero on 0d arrays was surprising, making uses of it almost always incorrect. If the old behavior was intended, it can be preserved without a warning by using ``nonzero(atleast_1d(arr))`` instead of ``nonzero(arr)``. In a future release, it is most likely this will raise a `ValueError`.
Writing to the result of `numpy.broadcast_arrays` will warn
-----------------------------------------------------------

Commonly `numpy.broadcast_arrays` returns a writeable array with internal overlap, making it unsafe to write to. A future version will set the ``writeable`` flag to ``False``, and require users to manually set it to ``True`` if they are sure that is what they want to do. Now writing to it will emit a deprecation warning with instructions to set the ``writeable`` flag ``True``. Note that if one were to inspect the flag before setting it, one would find it would already be ``True``. Explicitly setting it, though, as one will need to do in future versions, clears an internal flag that is used to produce the deprecation warning. To help alleviate confusion, an additional `FutureWarning` will be emitted when accessing the ``writeable`` flag state to clarify the contradiction.

Future Changes
==============

Shape-1 fields in dtypes won't be collapsed to scalars in a future version
--------------------------------------------------------------------------

Currently, a field specified as ``[(name, dtype, 1)]`` or ``"1type"`` is interpreted as a scalar field (i.e., the same as ``[(name, dtype)]`` or ``[(name, dtype, ())]``). This now raises a FutureWarning; in a future version, it will be interpreted as a shape-(1,) field, i.e. the same as ``[(name, dtype, (1,))]`` or ``"(1,)type"`` (consistently with ``[(name, dtype, n)]`` / ``"ntype"`` with ``n>1``, which is already equivalent to ``[(name, dtype, (n,))]`` / ``"(n,)type"``).

Compatibility notes
===================

float16 subnormal rounding
--------------------------

Casting from a different floating point precision to float16 used incorrect rounding in some edge cases. This means in rare cases, subnormal results will now be rounded up instead of down, changing the last bit (ULP) of the result.

Signed zero when using divmod
-----------------------------

Starting in version 1.12.0, numpy incorrectly returned a negatively signed zero when using the ``divmod`` and ``floor_divide`` functions when the result was zero. For example::

    >>> np.zeros(10)//1
    array([-0., -0., -0., -0., -0., -0., -0., -0., -0., -0.])

With this release, the result is correctly returned as a positively signed zero::

    >>> np.zeros(10)//1
    array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])

``MaskedArray.mask`` now returns a view of the mask, not the mask itself
------------------------------------------------------------------------

Returning the mask itself was unsafe, as it could be reshaped in place which would violate expectations of the masked array code. Its behavior is now consistent with the ``.data`` attribute, which also returns a view. The underlying mask can still be accessed with ``._mask`` if it is needed. Tests that contain ``assert x.mask is not y.mask`` or similar will need to be updated.

Do not lookup ``__buffer__`` attribute in `numpy.frombuffer`
------------------------------------------------------------

Looking up ``__buffer__`` attribute in `numpy.frombuffer` was undocumented and non-functional. This code was removed. If needed, use ``frombuffer(memoryview(obj), ...)`` instead.

``out`` is buffered for memory overlaps in ``np.take``, ``np.choose``, ``np.put``
---------------------------------------------------------------------------------

If the out argument to these functions is provided and has memory overlap with the other arguments, it is now buffered to avoid order-dependent behavior.
Unpickling while loading requires explicit opt-in
-------------------------------------------------

The functions ``np.load`` and ``np.lib.format.read_array`` take an ``allow_pickle`` keyword which now defaults to ``False`` in response to `CVE-2019-6446 <https://nvd.nist.gov/vuln/detail/CVE-2019-6446>`_.

Potential changes to the random stream in old random module
-----------------------------------------------------------

Due to bugs in the application of log to random floating point numbers, the stream may change when sampling from ``np.random.beta``, ``np.random.binomial``, ``np.random.laplace``, ``np.random.logistic``, ``np.random.logseries`` or ``np.random.multinomial`` if a 0 is generated in the underlying MT19937 random stream. There is a 1 in :math:`10^{53}` chance of this occurring, and so the probability that the stream changes for any given seed is extremely small. If a 0 is encountered in the underlying generator, then the incorrect value produced (either ``np.inf`` or ``np.nan``) is now dropped.

``i0`` now always returns a result with the same shape as the input
-------------------------------------------------------------------

Previously, the output was squeezed, such that, e.g., input with just a single element would lead to an array scalar being returned, and inputs with shapes such as ``(10, 1)`` would yield results that would not broadcast against the input. Note that we generally recommend the SciPy implementation over the numpy one: it is a proper ufunc written in C, and more than an order of magnitude faster.

``np.can_cast`` no longer assumes all unsafe casting is allowed
---------------------------------------------------------------

Previously, ``can_cast`` returned `True` for almost all inputs for ``casting='unsafe'``, even for cases where casting was not possible, such as from a structured dtype to a regular one. This has been fixed, making it more consistent with actual casting using, e.g., the ``.astype`` method.

``arr.writeable`` can be switched to true slightly more often
-------------------------------------------------------------

In rare cases, it was not possible to switch an array from not writeable to writeable, although a base array is writeable. This can happen if an intermediate ``arr.base`` object is writeable. Previously, only the deepest base object was considered for this decision. However, in rare cases this object does not have the necessary information. In that case switching to writeable was never allowed. This has now been fixed.

C API changes
=============

dimension or stride input arguments are now passed by ``npy_intp const*``
-------------------------------------------------------------------------

Previously these function arguments were declared as the more strict ``npy_intp*``, which prevented the caller passing constant data. This change is backwards compatible, but now allows code like::

    npy_intp const fixed_dims[] = {1, 2, 3};
    // no longer complains that the const-qualifier is discarded
    npy_intp size = PyArray_MultiplyList(fixed_dims, 3);

New Features
============

New extensible random module with selectable random number generators
----------------------------------------------------------------------

A new extensible random module along with four selectable random number generators and improved seeding designed for use in parallel processes has been added. The currently available bit generators are MT19937, PCG64, Philox, and SFC64. PCG64 is the new default while MT19937 is retained for backwards compatibility.
Note that the legacy random module is unchanged and is now frozen, so your current results will not change. Extensive documentation for the new module is available online at `NumPy devdocs <http://www.numpy.org/devdocs/reference/random/index.html>`_.

libFLAME
--------

Support for building NumPy with the libFLAME linear algebra package as the LAPACK implementation; see `libFLAME <https://www.cs.utexas.edu/~flame/web/libFLAME.html>`_ for details.

User-defined BLAS detection order
---------------------------------

``numpy.distutils`` now uses an environment variable, comma-separated and case insensitive, to determine the detection order for BLAS libraries. By default ``NPY_BLAS_ORDER=mkl,blis,openblas,atlas,accelerate,blas``. To force the use of OpenBLAS::

    NPY_BLAS_ORDER=openblas python setup.py build

This may be helpful for users who have an MKL installation but wish to try out different implementations.

User-defined LAPACK detection order
-----------------------------------

``numpy.distutils`` now uses an environment variable, comma-separated and case insensitive, to determine the detection order for LAPACK libraries. By default ``NPY_LAPACK_ORDER=mkl,openblas,flame,atlas,accelerate,lapack``. To force the use of OpenBLAS::

    NPY_LAPACK_ORDER=openblas python setup.py build

This may be helpful for users who have an MKL installation but wish to try out different implementations.

``np.ufunc.reduce`` and related functions now accept a ``where`` mask
---------------------------------------------------------------------

``np.ufunc.reduce``, ``np.sum``, ``np.prod``, ``np.min``, ``np.max`` all now accept a ``where`` keyword argument, which can be used to tell which elements to include in the reduction. For reductions that do not have an identity, it is necessary to also pass in an initial value (e.g., ``initial=np.inf`` for ``np.min``). For instance, the equivalent of ``nansum`` would be ``np.sum(a, where=~np.isnan(a))``.

Timsort and radix sort have replaced mergesort for stable sorting
-----------------------------------------------------------------

Both radix sort and timsort have been implemented and are now used in place of mergesort. Due to the need to maintain backward compatibility, the sorting ``kind`` options ``"stable"`` and ``"mergesort"`` have been made aliases of each other, with the actual sort implementation depending on the array type. Radix sort is used for small integer types of 16 bits or less, and timsort for the remaining types. Timsort features improved performance on already or nearly sorted data, performs like mergesort on random data, and requires O(n/2) working space. Details of the timsort algorithm can be found at `CPython listsort.txt <https://github.com/python/cpython/blob/3.7/Objects/listsort.txt>`_.

``np.unpackbits`` now accepts a ``count`` parameter
---------------------------------------------------

``count`` allows subsetting the number of bits that will be unpacked up-front, rather than reshaping and subsetting later, making the ``packbits`` operation invertible, and the unpacking less wasteful. Counts larger than the number of available bits add zero padding. Negative counts trim bits off the end instead of counting from the beginning. None counts implement the existing behavior of unpacking everything.
``np.linalg.svd`` and ``np.linalg.pinv`` can be faster on hermitian inputs
--------------------------------------------------------------------------

These functions now accept a ``hermitian`` argument, matching the one added to ``np.linalg.matrix_rank`` in 1.14.0.

divmod operation is now supported for two ``timedelta64`` operands
------------------------------------------------------------------

The divmod operator now handles two ``np.timedelta64`` operands, with type signature mm->qm.

``np.fromfile`` now takes an ``offset`` argument
------------------------------------------------

This function now takes an ``offset`` keyword argument for binary files, which specifies the offset (in bytes) from the file's current position. Defaults to 0.

New mode "empty" for ``np.pad``
-------------------------------

This mode pads an array to a desired shape without initializing the new entries.

``np.empty_like`` and related functions now accept a ``shape`` argument
-----------------------------------------------------------------------

``np.empty_like``, ``np.full_like``, ``np.ones_like`` and ``np.zeros_like`` now accept a ``shape`` keyword argument, which can be used to create a new array as the prototype, overriding its shape as well. This is particularly useful when combined with the ``__array_function__`` protocol, allowing the creation of new arbitrary-shape arrays from NumPy-like libraries when such an array is used as the prototype.

Floating point scalars implement ``as_integer_ratio`` to match the builtin float
--------------------------------------------------------------------------------

This returns a (numerator, denominator) pair, which can be used to construct a `fractions.Fraction`.

Structured ``dtype`` objects can be indexed with multiple field names
---------------------------------------------------------------------

``arr.dtype[['a', 'b']]`` now returns a dtype that is equivalent to ``arr[['a', 'b']].dtype``, for consistency with ``arr.dtype['a'] == arr['a'].dtype``. Like the dtype of structured arrays indexed with a list of fields, this dtype has the same ``itemsize`` as the original, but only keeps a subset of the fields. This means that ``arr[['a', 'b']]`` and ``arr.view(arr.dtype[['a', 'b']])`` are equivalent.

``.npy`` files support unicode field names
------------------------------------------

A new format version 3.0 has been introduced, which enables structured types with non-latin1 field names. This is used automatically when needed.

`numpy.packbits` and `numpy.unpackbits` accept an ``order`` keyword
-------------------------------------------------------------------

The ``order`` keyword defaults to ``big``, and will order the **bits** accordingly. For ``'big'``, 3 will become ``[0, 0, 0, 0, 0, 0, 1, 1]``, and ``[1, 1, 0, 0, 0, 0, 0, 0]`` for ``'little'``.

Improvements
============

Array comparison assertions include maximum differences
--------------------------------------------------------

Error messages from array comparison tests such as `np.testing.assert_allclose` now include "max absolute difference" and "max relative difference," in addition to the previous "mismatch" percentage. This information makes it easier to update absolute and relative error tolerances.

Replacement of the fftpack based FFT module by the pocketfft library
--------------------------------------------------------------------

Both implementations have the same ancestor (Fortran77 FFTPACK by Paul N.
Swarztrauber), but pocketfft contains additional modifications which improve both accuracy and performance in some circumstances. For FFT lengths containing large prime factors, pocketfft uses Bluestein's algorithm, which maintains ``O(N log N)`` run time complexity instead of deteriorating towards ``O(N*N)`` for prime lengths. Also, accuracy for real valued FFTs with near prime lengths has improved and is on par with complex valued FFTs.

Further improvements to ``ctypes`` support in `numpy.ctypeslib`
---------------------------------------------------------------

A new `numpy.ctypeslib.as_ctypes_type` function has been added, which can be used to convert a ``dtype`` into a best-guess ``ctypes`` type. Thanks to this new function, `numpy.ctypeslib.as_ctypes` now supports a much wider range of array types, including structures, booleans, and integers of non-native endianness.

`numpy.errstate` is now also a function decorator
-------------------------------------------------

Currently, if you have a function like::

    def foo():
        pass

and you want to wrap the whole thing in ``errstate``, you have to rewrite it like so::

    def foo():
        with np.errstate(...):
            pass

but with this change, you can do::

    @np.errstate(...)
    def foo():
        pass

thereby saving a level of indentation.

`numpy.exp` and `numpy.log` speed up for float32 implementation
---------------------------------------------------------------

The float32 implementations of numpy.exp and numpy.log now benefit from the AVX2/AVX512 instruction sets, which are detected during runtime. numpy.exp has a max ulp error of 2.52 and numpy.log has a max ulp error of 3.83.

Improve performance of `numpy.pad`
----------------------------------

The performance of the function has been improved for most cases by filling in a preallocated array with the desired padded shape instead of using concatenation.

`numpy.interp` handles infinities more robustly
-----------------------------------------------

In some cases where ``np.interp`` would previously return ``np.nan``, it now returns an appropriate infinity.

Pathlib support for ``np.fromfile``, ``ndarray.tofile`` and ``ndarray.dump``
----------------------------------------------------------------------------

``np.fromfile``, ``np.ndarray.tofile`` and ``np.ndarray.dump`` now support the `pathlib.Path` type for the ``file``/``fid`` parameter.

Specialized ``np.isnan``, ``np.isinf``, and ``np.isfinite`` ufuncs for bool and int types
-----------------------------------------------------------------------------------------

The boolean and integer types are incapable of storing ``np.nan`` and ``np.inf`` values, which allows us to provide specialized ufuncs that are up to 250x faster than the current approach.

``np.isfinite`` supports ``datetime64`` and ``timedelta64`` types
-----------------------------------------------------------------

Previously, `np.isfinite` used to raise a ``TypeError`` when used on these two types.

New keywords added to ``np.nan_to_num``
---------------------------------------

``np.nan_to_num`` now accepts the keywords ``nan``, ``posinf`` and ``neginf``, allowing the user to define the value to replace the ``nan``, positive and negative ``np.inf`` values respectively.

MemoryErrors caused by allocating overly large arrays are more descriptive
--------------------------------------------------------------------------

Often the cause of a MemoryError is incorrect broadcasting, which results in a very large and incorrect shape. The message of the error now includes this shape to help diagnose the cause of failure.
`floor`, `ceil`, and `trunc` now respect builtin magic methods
--------------------------------------------------------------

These ufuncs now call the ``__floor__``, ``__ceil__``, and ``__trunc__`` methods when called on object arrays, making them compatible with `decimal.Decimal` and `fractions.Fraction` objects.

``quantile`` now works on ``fraction.Fraction`` and ``decimal.Decimal`` objects
-------------------------------------------------------------------------------

In general, this handles object arrays more gracefully, and avoids floating-point operations if exact arithmetic types are used.

Support of object arrays in ``np.matmul``
-----------------------------------------

It is now possible to use ``np.matmul`` (or the ``@`` operator) with object arrays. For instance, it is now possible to do::

    from fractions import Fraction
    a = np.array([[Fraction(1, 2), Fraction(1, 3)], [Fraction(1, 3), Fraction(1, 2)]])
    b = a @ a

Changes
=======

``median`` and ``percentile`` family of functions no longer warn about ``nan``
------------------------------------------------------------------------------

`numpy.median`, `numpy.percentile`, and `numpy.quantile` used to emit a ``RuntimeWarning`` when encountering a `numpy.nan`. Since they return the ``nan`` value, the warning is redundant and has been removed.

``timedelta64 % 0`` behavior adjusted to return ``NaT``
-------------------------------------------------------

The modulus operation with two ``np.timedelta64`` operands now returns ``NaT`` in the case of division by zero, rather than returning zero.

NumPy functions now always support overrides with ``__array_function__``
------------------------------------------------------------------------

NumPy now always checks the ``__array_function__`` method to implement overrides of NumPy functions on non-NumPy arrays, as described in `NEP 18`_. The feature was available for testing with NumPy 1.16 if appropriate environment variables were set, but is now always enabled.

.. _`NEP 18` : http://www.numpy.org/neps/nep-0018-array-function-protocol.html

`numpy.lib.recfunctions.structured_to_unstructured` does not squeeze single-field views
----------------------------------------------------------------------------------------

Previously ``structured_to_unstructured(arr[['a']])`` would produce a squeezed result inconsistent with ``structured_to_unstructured(arr[['a', 'b']])``. This was accidental. The old behavior can be retained with ``structured_to_unstructured(arr[['a']]).squeeze(axis=-1)`` or, far more simply, ``arr['a']``.

``clip`` now uses a ufunc under the hood
----------------------------------------

This means that registering clip functions for custom dtypes in C via ``descr->f->fastclip`` is deprecated - they should use the ufunc registration mechanism instead, attaching to the ``np.core.umath.clip`` ufunc. It also means that ``clip`` accepts ``where`` and ``casting`` arguments, and can be overridden with ``__array_ufunc__``. A consequence of this change is that some behaviors of the old ``clip`` have been deprecated:

* Passing ``nan`` to mean "do not clip" as one or both bounds. This didn't work in all cases anyway, and can be better handled by passing infinities of the appropriate sign.
* Using "unsafe" casting by default when an ``out`` argument is passed. Using ``casting="unsafe"`` explicitly will silence this warning.

Additionally, there are some corner cases with behavior changes:

* Padding ``max < min`` has changed to be more consistent across dtypes, but should not be relied upon.
* Scalar ``min`` and ``max`` take part in promotion rules like they do in all other ufuncs.

``__array_interface__`` offset now works as documented
------------------------------------------------------

The interface may use an ``offset`` value that was mistakenly ignored.

Pickle protocol in ``np.savez`` set to 3 for ``force zip64`` flag
-----------------------------------------------------------------

``np.savez`` was not using the ``force_zip64`` flag, which limited the size of the archive to 2GB. But using the flag requires us to use pickle protocol 3 to write ``object`` arrays. The protocol used was bumped to 3, meaning the archive will be unreadable by Python 2.

Structured arrays indexed with non-existent fields raise ``KeyError`` not ``ValueError``
----------------------------------------------------------------------------------------

``arr['bad_field']`` on a structured type raises ``KeyError``, for consistency with ``dict['bad_field']``.
```
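For a concrete feel, here is a small illustrative sketch touching several of the 1.17 additions quoted above (assuming NumPy >= 1.17; this is not part of the upstream notes):

```
import numpy as np

# New Generator API; PCG64 is the default bit generator.
rng = np.random.default_rng(seed=42)
sample = rng.standard_normal(3)

# Reductions accept a where= mask; sum has an identity, so no initial value is needed.
a = np.array([1.0, np.nan, 3.0])
total = np.sum(a, where=~np.isnan(a))    # equivalent of nansum -> 4.0

# New nan_to_num keywords.
cleaned = np.nan_to_num(a, nan=0.0)      # -> [1., 0., 3.]

# unpackbits(count=...) makes packbits invertible.
bits = np.unpackbits(np.packbits([1, 0, 1]), count=3)   # -> [1, 0, 1]

# nonzero on 0d arrays is deprecated; the future-proof spelling:
idx = np.nonzero(np.atleast_1d(np.array(1)))

# Writing to broadcast_arrays() output will warn unless writeable is set explicitly.
x, y = np.broadcast_arrays(np.arange(3), np.arange(3)[:, None])
x.flags.writeable = True                 # explicit opt-in
```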
Links

- PyPI: https://pypi.org/project/numpy
- Changelog: https://pyup.io/changelogs/numpy/
- Homepage: https://www.numpy.org

Update pandas from 0.24.2 to 0.25.0.
The bot wasn't able to find a changelog for this release. Got an idea?
Links
- PyPI: https://pypi.org/project/pandas
- Homepage: http://pandas.pydata.org

Update pytz from 2019.1 to 2019.2.
The bot wasn't able to find a changelog for this release. Got an idea?
Links
- PyPI: https://pypi.org/project/pytz
- Homepage: http://pythonhosted.org/pytz
- Docs: https://pythonhosted.org/pytz/

Update tabula-py from 1.3.1 to 1.4.1.
Changelog
### 1.4.0
```
This release changes the `guess` option behavior because of a tabula-java fix. The `guess` option is chosen if you don't set the `area` option, and the `stream` and `lattice` options can now be used together with `guess`. This release also drops Python 2.7 and 3.4 support. See 167 for more details, and the Google Colab notebook for an example.

- Bump tabula-java 1.0.3 167
- Drop Python 2.7, 3.4 167
- Add User-agent handling 151
- Introduce JAR path environment variable 136
- Enforce silent option 132
```
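A short usage sketch of the 1.4 behaviour described above (the PDF file name is hypothetical):

```
import tabula

# `guess` stays enabled because no explicit `area` is given, and it can
# now be combined with stream-mode extraction.
df = tabula.read_pdf("report.pdf", stream=True)
```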
Links

- PyPI: https://pypi.org/project/tabula-py
- Changelog: https://pyup.io/changelogs/tabula-py/
- Repo: https://github.com/chezou/tabula-py

Update bandit from 1.6.1 to 1.6.2.
Changelog
### 1.6.2
```
* Performance fix (502) tylerwince

[See full changelog](https://github.com/PyCQA/bandit/compare/1.6.1...1.6.2)
```
Links

- PyPI: https://pypi.org/project/bandit
- Changelog: https://pyup.io/changelogs/bandit/
- Docs: https://bandit.readthedocs.io/en/latest/

Update coverage from 4.5.3 to 4.5.4.
Changelog
### 4.5.4
```
---------------------------

- Multiprocessing support in Python 3.8 was broken, but is now fixed. Closes `issue 828`_.

.. _issue 828: https://github.com/nedbat/coveragepy/issues/828
```
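The fix matters for runs measured across processes. A minimal sketch using coverage's Python API follows; the worker code is elided, and in real projects the ``concurrency`` option usually lives in ``.coveragerc`` rather than being passed in code:

```
import coverage

cov = coverage.Coverage(concurrency="multiprocessing")
cov.start()
# ... run code that spawns multiprocessing workers ...
cov.stop()
cov.save()
```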
Links

- PyPI: https://pypi.org/project/coverage
- Changelog: https://pyup.io/changelogs/coverage/
- Repo: https://github.com/nedbat/coveragepy

Update coveralls from 1.8.1 to 1.8.2.
Changelog
### 1.8.2
```
Internal

* **dependencies**: update pass urllib3<1.25 pin, now that that's fixed.
```
Links

- PyPI: https://pypi.org/project/coveralls
- Changelog: https://pyup.io/changelogs/coveralls/
- Repo: http://github.com/coveralls-clients/coveralls-python

Update flake8 from 3.7.7 to 3.7.8.
Changelog
### 3.7.8
```
-------------------

You can view the `3.7.8 milestone`_ on GitLab for more details.

Bugs Fixed
~~~~~~~~~~

- Fix handling of ``Application.parse_preliminary_options_and_args`` when argv is an empty list (See also `GitLab!310`_, `GitLab518`_)
- Fix crash when a file parses but fails to tokenize (See also `GitLab!314`_, `GitLab532`_)
- Log the full traceback on plugin exceptions (See also `GitLab!317`_)
- Fix ``# noqa: ...`` comments with multi-letter codes (See also `GitLab!326`_, `GitLab549`_)

.. all links
.. _3.7.8 milestone: https://gitlab.com/pycqa/flake8/milestones/31

.. issue links
.. _GitLab518: https://gitlab.com/pycqa/flake8/issues/518
.. _GitLab532: https://gitlab.com/pycqa/flake8/issues/532
.. _GitLab549: https://gitlab.com/pycqa/flake8/issues/549

.. merge request links
.. _GitLab!310: https://gitlab.com/pycqa/flake8/merge_requests/310
.. _GitLab!314: https://gitlab.com/pycqa/flake8/merge_requests/314
.. _GitLab!317: https://gitlab.com/pycqa/flake8/merge_requests/317
.. _GitLab!326: https://gitlab.com/pycqa/flake8/merge_requests/326
```
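As an illustration of the last fix, a suppression comment with a multi-letter plugin code is now honoured (``ABC123`` is a hypothetical plugin code, not a real flake8 check):

```
# Suppress a hypothetical multi-letter plugin code on this line only:
x = 1  # noqa: ABC123
```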
Links

- PyPI: https://pypi.org/project/flake8
- Changelog: https://pyup.io/changelogs/flake8/
- Repo: https://gitlab.com/pycqa/flake8

Update gitpython from 2.1.11 to 2.1.13.
Changelog
### 2.1.12
```
==============================

* Multi-value support and interface improvements for Git configuration. Thanks to A. Jesse Jiryu Davis.

See the following for (most) details: https://github.com/gitpython-developers/gitpython/milestone/27?closed=1

or have a look at the difference between tags v2.1.11 and v2.1.12: https://github.com/gitpython-developers/GitPython/compare/2.1.11...2.1.12
```
Links

- PyPI: https://pypi.org/project/gitpython
- Changelog: https://pyup.io/changelogs/gitpython/
- Repo: https://github.com/gitpython-developers/GitPython
- Docs: https://pythonhosted.org/GitPython/

Update isort from 4.3.20 to 4.3.21.
Changelog
### 4.3.21
```
- Fixed issue 957 - Long aliases and use_parentheses generates invalid syntax
```
Links

- PyPI: https://pypi.org/project/isort
- Changelog: https://pyup.io/changelogs/isort/
- Repo: https://github.com/timothycrosley/isort

Update jedi from 0.13.3 to 0.14.1.
Changelog
### 0.14.1
```
+++++++++++++++++++

- CallSignature.index should now be working a lot better
- A couple of smaller bugfixes
```

### 0.14.0
```
+++++++++++++++++++

- Added ``goto_*(prefer_stubs=True)`` as well as ``goto_*(only_stubs=True)``
- Stubs are used now for type inference
- Typeshed is used for better type inference
- Reworked Definition.full_name, should have more correct return values
```

### 0.13.4
```
====================

Changes
-------

* fix duplication in function parameters completion (267)
```
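A sketch of the new stub-aware lookups using jedi's 0.14-era positional ``Script`` arguments; the ``prefer_stubs`` flag comes from the changelog above, and the source snippet is made up:

```
import jedi

source = "import json\njson.load"
script = jedi.Script(source, 2, 9)  # (source, line, column)
definitions = script.goto_definitions(prefer_stubs=True)  # typeshed-backed results
```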
Links

- PyPI: https://pypi.org/project/jedi
- Changelog: https://pyup.io/changelogs/jedi/
- Repo: https://github.com/davidhalter/jedi

Update mypy from 0.701 to 0.720.
The bot wasn't able to find a changelog for this release. Got an idea?
Links
- PyPI: https://pypi.org/project/mypy
- Homepage: http://www.mypy-lang.org/

Update packaging from 19.0 to 19.1.
The bot wasn't able to find a changelog for this release. Got an idea?
Links
- PyPI: https://pypi.org/project/packaging
- Changelog: https://pyup.io/changelogs/packaging/
- Repo: https://github.com/pypa/packaging

Update parso from 0.4.0 to 0.5.1.
Changelog
### 0.5.1
```
++++++++++++++++++

- Fix: Some unicode identifiers were not correctly tokenized
- Fix: Line continuations in f-strings are now working
```

### 0.5.0
```
++++++++++++++++++

- **Breaking Change** comp_for is now called sync_comp_for for all Python versions to be compatible with the Python 3.8 Grammar
- Added .pyi stubs for a lot of the parso API
- Small FileIO changes
```
Links

- PyPI: https://pypi.org/project/parso
- Changelog: https://pyup.io/changelogs/parso/
- Repo: https://github.com/davidhalter/parso

Update pbr from 5.3.0 to 5.4.2.
The bot wasn't able to find a changelog for this release. Got an idea?
Links
- PyPI: https://pypi.org/project/pbr
- Homepage: https://docs.openstack.org/pbr/latest/

Update pip-licenses from 1.15.0 to 1.15.2.
Changelog
### 1.15.1
```
* Skip parsing of license file for packages specified with `--ignore-packages` option
```
Links

- PyPI: https://pypi.org/project/pip-licenses
- Changelog: https://pyup.io/changelogs/pip-licenses/
- Repo: https://github.com/raimon49/pip-licenses

Update pydocstyle from 3.0.0 to 4.0.0.
The bot wasn't able to find a changelog for this release. Got an idea?
Links
- PyPI: https://pypi.org/project/pydocstyle
- Changelog: https://pyup.io/changelogs/pydocstyle/
- Repo: https://github.com/PyCQA/pydocstyle/

Update pylint-django from 2.0.9 to 2.0.11.
Changelog
### 2.0.11
```
-----------------------------

- Use ``functools.wraps`` to preserve ``leave_module`` info (Mohit Solanki)
```

### 2.0.10
```
-----------------------------------------------

- Suppress ``no-member`` for ``ManyToManyField``. Fix `192 <https://github.com/PyCQA/pylint-django/issues/192>`_ and `237 <https://github.com/PyCQA/pylint-django/issues/237>`_ (Pierre Chiquet)
- Fix ``UnboundLocalError`` with ``ForeignKey(to=)``. Fix `232 <https://github.com/PyCQA/pylint-django/issues/232>`_ (Sardorbek Imomaliev)
```
Links

- PyPI: https://pypi.org/project/pylint-django
- Changelog: https://pyup.io/changelogs/pylint-django/
- Repo: https://github.com/PyCQA/pylint-django

Update pyparsing from 2.4.0 to 2.4.2.
Changelog
### 2.4.2
```
- API change adding support for `expr[...]` - the original code in 2.4.1 incorrectly implemented this as OneOrMore. Code using this feature under this release should explicitly use `expr[0, ...]` for ZeroOrMore and `expr[1, ...]` for OneOrMore. In 2.4.2 you will be able to write `expr[...]` equivalent to `ZeroOrMore(expr)`.

- Bug if composing And, Or, MatchFirst, or Each expressions using an expression. This only affects code which uses explicit expression construction using the And, Or, etc. classes instead of using overloaded operators '+', '^', and so on. If constructing an And using a single expression, you may get an error that "cannot multiply ParserElement by 0 or (0, 0)" or a Python `IndexError`. Change code like:

      cmd = Or(Word(alphas))

  to:

      cmd = Or([Word(alphas)])

  (Note that this is not the recommended style for constructing Or expressions.)

- Some newly-added `__diag__` switches are enabled by default, which may give rise to noisy user warnings for existing parsers. You can disable them using:

      import pyparsing as pp
      pp.__diag__.warn_multiple_tokens_in_named_alternation = False
      pp.__diag__.warn_ungrouped_named_tokens_in_collection = False
      pp.__diag__.warn_name_set_on_empty_Forward = False
      pp.__diag__.warn_on_multiple_string_args_to_oneof = False
      pp.__diag__.enable_debug_on_named_expressions = False

  In 2.4.2 these will all be set to False by default.
```

### 2.4.2a1
```
----------------------------

It turns out I got the meaning of `[...]` absolutely backwards, so I've deleted 2.4.1 and am repushing this release as 2.4.2a1 for people to give it a try before I can call it ready to go. The `expr[...]` notation was pushed out to be synonymous with `OneOrMore(expr)`, but this is really counter to most Python notations (and even other internal pyparsing notations as well). It should have been defined to be equivalent to ZeroOrMore(expr).

- Changed [...] to emit ZeroOrMore instead of OneOrMore.
- Removed code that treats ParserElements like iterables.
- Change all __diag__ switches to False.
```

### 2.4.1.1
```
-------------------------------

This is a re-release of version 2.4.1 to restore the release history in PyPI, since the 2.4.1 release was deleted. There are 3 known issues in this release, which are fixed in
```

### 2.4.1
```
--------------------------

- NOTE: Deprecated functions and features that will be dropped in pyparsing 2.5.0 (planned next release):

  . support for Python 2 - ongoing users running with Python 2 can continue to use pyparsing 2.4.1
  . ParseResults.asXML() - if used for debugging, switch to using ParseResults.dump(); if used for data transfer, use ParseResults.asDict() to convert to a nested Python dict, which can then be converted to XML or JSON or other transfer format
  . operatorPrecedence synonym for infixNotation - convert to calling infixNotation
  . commaSeparatedList - convert to using pyparsing_common.comma_separated_list
  . upcaseTokens and downcaseTokens - convert to using pyparsing_common.upcaseTokens and downcaseTokens
  . __compat__.collect_all_And_tokens will not be settable to False to revert to pre-2.3.1 results name behavior - review use of names for MatchFirst and Or expressions containing And expressions, as they will return the complete list of parsed tokens, not just the first one. Use __diag__.warn_multiple_tokens_in_named_alternation (described below) to help identify those expressions in your parsers that will have changed as a result.

- A new shorthand notation has been added for repetition expressions: expr[min, max], with '...'
  valid as a min or max value:

  - expr[...] is equivalent to OneOrMore(expr)
  - expr[0, ...] is equivalent to ZeroOrMore(expr)
  - expr[1, ...] is equivalent to OneOrMore(expr)
  - expr[n, ...] or expr[n,] is equivalent to expr*n + ZeroOrMore(expr) (read as "n or more instances of expr")
  - expr[..., n] is equivalent to expr*(0, n)
  - expr[m, n] is equivalent to expr*(m, n)

  Note that expr[..., n] and expr[m, n] do not raise an exception if more than n exprs exist in the input stream. If this behavior is desired, then write expr[..., n] + ~expr.

- '...' can also be used as short hand for SkipTo when used in adding parse expressions to compose an And expression.

      Literal('start') + ... + Literal('end')
      And(['start', ..., 'end'])

  are both equivalent to:

      Literal('start') + SkipTo('end')("_skipped*") + Literal('end')

  The '...' form has the added benefit of not requiring repeating the skip target expression. Note that the skipped text is returned with '_skipped' as a results name, and that the contents of `_skipped` will contain a list of text from all `...`s in the expression.

- '...' can also be used as a "skip forward in case of error" expression:

      expr = "start" + (Word(nums).setName("int") | ...) + "end"

      expr.parseString("start 456 end")
      ['start', '456', 'end']

      expr.parseString("start 456 foo 789 end")
      ['start', '456', 'foo 789 ', 'end']
      - _skipped: ['foo 789 ']

      expr.parseString("start foo end")
      ['start', 'foo ', 'end']
      - _skipped: ['foo ']

      expr.parseString("start end")
      ['start', '', 'end']
      - _skipped: ['missing <int>']

  Note that in all the error cases, the '_skipped' results name is present, showing a list of the extra or missing items. This form is only valid when used with the '|' operator.

- Improved exception messages to show what was actually found, not just what was expected.

      word = pp.Word(pp.alphas)
      pp.OneOrMore(word).parseString("aaa bbb 123", parseAll=True)

  Former exception message:

      pyparsing.ParseException: Expected end of text (at char 8), (line:1, col:9)

  New exception message:

      pyparsing.ParseException: Expected end of text, found '1' (at char 8), (line:1, col:9)

- Added diagnostic switches to help detect and warn about common parser construction mistakes, or enable additional parse debugging. Switches are attached to the pyparsing.__diag__ namespace object:

  - warn_multiple_tokens_in_named_alternation - flag to enable warnings when a results name is defined on a MatchFirst or Or expression with one or more And subexpressions (default=True)
  - warn_ungrouped_named_tokens_in_collection - flag to enable warnings when a results name is defined on a containing expression with ungrouped subexpressions that also have results names (default=True)
  - warn_name_set_on_empty_Forward - flag to enable warnings when a Forward is defined with a results name, but has no contents defined (default=False)
  - warn_on_multiple_string_args_to_oneof - flag to enable warnings when oneOf is incorrectly called with multiple str arguments (default=True)
  - enable_debug_on_named_expressions - flag to auto-enable debug on all subsequent calls to ParserElement.setName() (default=False)

  warn_multiple_tokens_in_named_alternation is intended to help those who currently have set __compat__.collect_all_And_tokens to False as a workaround for using the pre-2.3.1 code with named MatchFirst or Or expressions containing an And expression.

- Added ParseResults.from_dict classmethod, to simplify creation of a ParseResults with results names using a dict, which may be nested.
  This makes it easy to add a sub-level of named items to the parsed tokens in a parse action.

- Added asKeyword argument (default=False) to oneOf, to force keyword-style matching on the generated expressions.

- ParserElement.runTests now accepts an optional 'file' argument to redirect test output to a file-like object (such as a StringIO, or opened file). Default is to write to sys.stdout.

- conditionAsParseAction is a helper method for constructing a parse action method from a predicate function that simply returns a boolean result. Useful for those places where a predicate cannot be added using addCondition, but must be converted to a parse action (such as in infixNotation). May be used as a decorator if default message and exception types can be used. See ParserElement.addCondition for more details about the expected signature and behavior for predicate condition methods.

- While investigating issue 93, I found that Or and addCondition could interact to select an alternative that is not the longest match. This is because Or first checks all alternatives for matches without running attached parse actions or conditions, orders by longest match, and then rechecks for matches with conditions and parse actions. Some expressions, when checking with conditions, may end up matching on a shorter token list than originally matched, but would be selected because of its original priority. This matching code has been expanded to do more extensive searching for matches when a second-pass check matches a smaller list than in the first pass.

- Fixed issue 87, a regression in indented block. Reported by Renz Bagaporo, who submitted a very nice repro example, which makes the bug-fixing process a lot easier, thanks!

- Fixed MemoryError issue 85 and 91 with str generation for Forwards. Thanks decalage2 and Harmon758 for your patience.

- Modified setParseAction to accept None as an argument, indicating that all previously-defined parse actions for the expression should be cleared.

- Modified pyparsing_common.real and sci_real to parse reals without leading integer digits before the decimal point, consistent with Python real number formats. Original PR 98 submitted by ansobolev.

- Modified runTests to call postParse function before dumping out the parsed results - allows for postParse to add further results, such as indications of additional validation success/failure.

- Updated statemachine example: refactored state transitions to use overridden classmethods; added <statename>Mixin class to simplify definition of application classes that "own" the state object and delegate to it to model state-specific properties and behavior.

- Added example nested_markup.py, showing a simple wiki markup with nested markup directives, and illustrating the use of '...' for skipping over input to match the next expression. (This example uses syntax that is not valid under Python 2.)

- Rewrote delta_time.py example (renamed from deltaTime.py) to fix some omitted formats and upgrade to latest pyparsing idioms, beginning with writing an actual BNF.

- With the help and encouragement from several contributors, including Matěj Cepl and Cengiz Kaygusuz, I've started cleaning up the internal coding styles in core pyparsing, bringing it up to modern coding practices from pyparsing's early development days dating back to 2003. Whitespace has been largely standardized along PEP8 guidelines, removing extra spaces around parentheses, and adding them around arithmetic operators and after colons and commas.
  I was going to hold off on doing this work until after 2.4.1, but after cleaning up a few trial classes, the difference was so significant that I continued on to the rest of the core code base. This should facilitate future work and submitted PRs, allowing them to focus on substantive code changes, and not get sidetracked by whitespace issues.
```
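A quick sketch of the repetition shorthand as it lands in 2.4.2, where `expr[...]` finally means ZeroOrMore:

```
import pyparsing as pp

word = pp.Word(pp.alphas)
# expr[1, ...] is OneOrMore; expr[...] and expr[0, ...] are ZeroOrMore.
print(word[1, ...].parseString("lorem ipsum dolor"))  # ['lorem', 'ipsum', 'dolor']
```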
Links

- PyPI: https://pypi.org/project/pyparsing
- Changelog: https://pyup.io/changelogs/pyparsing/
- Repo: https://github.com/pyparsing/pyparsing/
- Docs: https://pythonhosted.org/pyparsing/

Update pyyaml from 5.1.1 to 5.1.2.
The bot wasn't able to find a changelog for this release. Got an idea?
Links
- PyPI: https://pypi.org/project/pyyaml
- Repo: https://github.com/yaml/pyyaml

Update snowballstemmer from 1.2.1 to 1.9.0.
The bot wasn't able to find a changelog for this release. Got an idea?
Links
- PyPI: https://pypi.org/project/snowballstemmer
- Repo: https://github.com/snowballstem/snowball

Update typed-ast from 1.3.5 to 1.4.0.
The bot wasn't able to find a changelog for this release. Got an idea?
Links
- PyPI: https://pypi.org/project/typed-ast
- Repo: https://github.com/python/typed_ast

Update virtualenv from 16.6.1 to 16.7.2.
Changelog
### 16.7.1
```
--------------------

Features
^^^^^^^^

- pip bumped to 19.2.1 (`1392 <https://github.com/pypa/virtualenv/issues/1392>`_)
```

### 16.7.0
```
--------------------

Features
^^^^^^^^

- ``activate.ps1`` syntax and style updated to follow ``PSStyleAnalyzer`` rules (`1371 <https://github.com/pypa/virtualenv/issues/1371>`_)
- Allow creating virtual environments for ``3.xy``. (`1385 <https://github.com/pypa/virtualenv/issues/1385>`_)
- Report an error when running activate scripts directly, instead of sourcing. By reporting an error instead of running silently, the user gets immediate feedback that the script was not used correctly. Only Bash and PowerShell are supported for now. (`1388 <https://github.com/pypa/virtualenv/issues/1388>`_)
- Add pip 19.2 (19.1.1 is kept to still support python 3.4, dropped by latest pip) (`1389 <https://github.com/pypa/virtualenv/issues/1389>`_)
```

### 16.6.2
```
--------------------

Bugfixes
^^^^^^^^

- Extend the LICENSE search paths list by ``lib64/pythonX.Y`` to support Linux vendors who install their Python to ``/usr/lib64/pythonX.Y`` (Gentoo, Fedora, openSUSE, RHEL and others) - by ``hroncok`` (`1382 <https://github.com/pypa/virtualenv/issues/1382>`_)
```
Links

- PyPI: https://pypi.org/project/virtualenv
- Changelog: https://pyup.io/changelogs/virtualenv/
- Homepage: https://virtualenv.pypa.io/