ambv closed this issue 3 years ago.
One case that I believe relates to this bug (non-relevant parts removed):
Using the workaround (running with --fast) worked for my case.
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
- if (
- mission.get("status")
- not in [
- "new",
- "test",
- "scheduled"
- ]
- and not mission.get("redeem")
- ):
+ if mission.get("status") not in [
+ "new",
+ "test",
+ "scheduled",
+ ] and not mission.get("redeem"):
--- first pass
+++ second pass
@@ -7373,15 +7373,19 @@
- if mission.get("status") not in [
- "new",
- "test",
- "scheduled",
- ] and not mission.get("redeem"):
+ if (
+ mission.get("status")
+ not in [
+ "new",
+ "test",
+ "scheduled",
+ ]
+ and not mission.get("redeem")
+ ):
A few more cases that seem to be related (running with Black 20.8b1):
Possibly related, though in my codebase it all seems to be right around uses of a trailing `# noqa` to make flake8 happy with long lines from before I was using Black. The workaround did the trick to get me unstuck.
I encountered this bug when trying to remove trailing commas. Given the following fragment (already formatted with Black):
assert (
xxxxxx(
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,
xxxxxxxxxxxxxxxxxxxxxxxxx,
)
== xxxxxx(xxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxx)
)
removing the trailing comma on line 4 breaks Black:
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -1,7 +1,4 @@
-assert (
- xxxxxx(
- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,
- xxxxxxxxxxxxxxxxxxxxxxxxx
- )
- == xxxxxx(xxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxx)
-)
+assert xxxxxx(
+ xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,
+ xxxxxxxxxxxxxxxxxxxxxxxxx,
+) == xxxxxx(xxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxx)
--- first pass
+++ second pass
@@ -1,4 +1,7 @@
-assert xxxxxx(
- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,
- xxxxxxxxxxxxxxxxxxxxxxxxx,
-) == xxxxxx(xxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxx)
+assert (
+ xxxxxx(
+ xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,
+ xxxxxxxxxxxxxxxxxxxxxxxxx,
+ )
+ == xxxxxx(xxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxx)
+)
The workaround produces output with the trailing comma restored, which differs from the original:
assert xxxxxx(
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,
xxxxxxxxxxxxxxxxxxxxxxxxx,
) == xxxxxx(xxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxx)
Source:
@staticmethod
def _pd_dtype_to_es_dtype(pd_dtype):
"""
Mapping pandas dtypes to Elasticsearch dtype
--------------------------------------------
Pandas dtype Python type NumPy type Usage
object str string_, unicode_ Text
int64 int int_, int8, int16, int32, int64, uint8, uint16, uint32, uint64 Integer numbers
float64 float float_, float16, float32, float64 Floating point numbers
bool bool bool_ True/False values
datetime64 NA datetime64[ns] Date and time values
timedelta[ns] NA NA Differences between two datetimes
category NA NA Finite list of text values
```
"""
(lots of tabs in the docstring)
Log:
```diff
--- src
+++ dst
@@ -5280,11 +5280,11 @@
body=
Expr(
value=
Constant(
value=
- 'Mapping pandas dtypes to Elasticsearch dtype\n--------------------------------------------\n\n```\nPandas dtype\tPython type\tNumPy type\tUsage\nobject\tstr\tstring_, unicode_\tText\nint64\tint\tint_, int8, int16, int32, int64, uint8, uint16, uint32, uint64\tInteger numbers\nfloat64\tfloat\tfloat_, float16, float32, float64\tFloating point numbers\nbool\tbool\tbool_\tTrue/False values\ndatetime64\tNA\tdatetime64[ns]\tDate and time values\ntimedelta[ns]\tNA\tNA\tDifferences between two datetimes\ncategory\tNA\tNA\tFinite list of text values\n```', # str
+ 'Mapping pandas dtypes to Elasticsearch dtype\n--------------------------------------------\n\n```\nPandas dtype Python type NumPy type Usage\nobject str string_, unicode_ Text\nint64 int int_, int8, int16, int32, int64, uint8, uint16, uint32, uint64 Integer numbers\nfloat64 float float_, float16, float32, float64 Floating point numbers\nbool bool bool_ True/False values\ndatetime64 NA datetime64[ns] Date and time values\ntimedelta[ns] NA NA Differences between two datetimes\ncategory NA NA Finite list of text values\n```', # str
) # /Constant
) # /Expr
Assign(
targets=
Name(
```
@sethmlarson that is a case of #1601. It isn't unstable formatting (where the first and second passes of Black differ); rather, Black fails its own AST safety checks.
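For anyone unfamiliar with the distinction: the safety check compares the parse trees of input and output, while the stability check compares a second formatting pass against the first. A minimal stdlib sketch of the AST side (not Black's actual implementation, which also normalizes things like string prefixes and docstrings before comparing):

```python
import ast

def asts_equivalent(src: str, dst: str) -> bool:
    # Parse both versions and compare their structure; formatting-only
    # changes (line breaks, redundant parentheses) leave the AST unchanged.
    return ast.dump(ast.parse(src)) == ast.dump(ast.parse(dst))

# Reformatting that only moves whitespace and parentheses is AST-equivalent:
assert asts_equivalent("x = (1 +\n     2)", "x = 1 + 2")
```

When this check fails, Black refuses to write the output even though both passes might agree, which is why `--fast` (skipping the check) is a different workaround than the line-length trick discussed elsewhere in this thread.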
We (cc @kratsg @lukasheinrich) have one from pyhf, found in https://github.com/scikit-hep/pyhf/pull/1048 and produced by the following diff in the codebase:
$ git diff
diff --git a/tests/test_tensor.py b/tests/test_tensor.py
index 8cbfdead..e6b3afc6 100644
--- a/tests/test_tensor.py
+++ b/tests/test_tensor.py
@@ -82,7 +82,7 @@ def test_complex_tensor_ops(backend):
tb.where(
tb.astensor([1, 0, 1], dtype="bool"),
tb.astensor([1, 1, 1]),
- tb.astensor([2, 2, 2]),
+ tb.astensor([2, 2, 2])
)
)
== [1, 2, 1]
Black error diff:
Mode(target_versions={<TargetVersion.PY36: 6>, <TargetVersion.PY38: 8>, <TargetVersion.PY37: 7>}, line_length=88, string_normalization=False, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -75,20 +75,17 @@
-1,
0,
1,
1,
]
- assert (
- tb.tolist(
- tb.where(
- tb.astensor([1, 0, 1], dtype="bool"),
- tb.astensor([1, 1, 1]),
- tb.astensor([2, 2, 2])
- )
- )
- == [1, 2, 1]
- )
+ assert tb.tolist(
+ tb.where(
+ tb.astensor([1, 0, 1], dtype="bool"),
+ tb.astensor([1, 1, 1]),
+ tb.astensor([2, 2, 2]),
+ )
+ ) == [1, 2, 1]
def test_ones(backend):
tb = pyhf.tensorlib
assert tb.tolist(tb.ones((2, 3))) == [[1, 1, 1], [1, 1, 1]]
--- first pass
+++ second pass
@@ -75,17 +75,20 @@
-1,
0,
1,
1,
]
- assert tb.tolist(
- tb.where(
- tb.astensor([1, 0, 1], dtype="bool"),
- tb.astensor([1, 1, 1]),
- tb.astensor([2, 2, 2]),
- )
- ) == [1, 2, 1]
+ assert (
+ tb.tolist(
+ tb.where(
+ tb.astensor([1, 0, 1], dtype="bool"),
+ tb.astensor([1, 1, 1]),
+ tb.astensor([2, 2, 2]),
+ )
+ )
+ == [1, 2, 1]
+ )
def test_ones(backend):
tb = pyhf.tensorlib
assert tb.tolist(tb.ones((2, 3))) == [[1, 1, 1], [1, 1, 1]]
Diff in the codebase:
diff --git a/pandas/tests/scalar/timestamp/test_constructors.py b/pandas/tests/scalar/timestamp/test_constructors.py
index 316a299ba..c70eacdfa 100644
--- a/pandas/tests/scalar/timestamp/test_constructors.py
+++ b/pandas/tests/scalar/timestamp/test_constructors.py
@@ -267,7 +267,7 @@ class TestTimestampConstructors:
hour=1,
minute=2,
second=3,
- microsecond=999999,
+ microsecond=999999
)
) == repr(Timestamp("2015-11-12 01:02:03.999999"))
Generated log:
Mode(target_versions={<TargetVersion.PY37: 7>, <TargetVersion.PY38: 8>}, line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -257,22 +257,21 @@
assert repr(Timestamp(year=2015, month=11, day=12)) == repr(
Timestamp("20151112")
)
- assert(repr(
+ assert repr(
Timestamp(
year=2015,
month=11,
day=12,
hour=1,
minute=2,
second=3,
- microsecond=999999
+ microsecond=999999,
)
) == repr(Timestamp("2015-11-12 01:02:03.999999"))
- )
def test_constructor_fromordinal(self):
base = datetime(2000, 1, 1)
ts = Timestamp.fromordinal(base.toordinal(), freq="D")
--- first pass
+++ second pass
@@ -257,21 +257,24 @@
assert repr(Timestamp(year=2015, month=11, day=12)) == repr(
Timestamp("20151112")
)
- assert repr(
- Timestamp(
- year=2015,
- month=11,
- day=12,
- hour=1,
- minute=2,
- second=3,
- microsecond=999999,
+ assert (
+ repr(
+ Timestamp(
+ year=2015,
+ month=11,
+ day=12,
+ hour=1,
+ minute=2,
+ second=3,
+ microsecond=999999,
+ )
)
- ) == repr(Timestamp("2015-11-12 01:02:03.999999"))
+ == repr(Timestamp("2015-11-12 01:02:03.999999"))
+ )
def test_constructor_fromordinal(self):
base = datetime(2000, 1, 1)
ts = Timestamp.fromordinal(base.toordinal(), freq="D")
Another case that I think is related to this issue:
Source:
if any(
k in t
for k in ["AAAAAAAAAA", "AAAAA", "AAAAAA", "AAAAAAAAA", "AAA", "AAAAAA", "AAAAAAAA", "AAA", "AAAAA", "AAAAA", "AAAA"]
) and not any(k in t for k in ["AAA"]):
pass
Log:
Mode(target_versions={<TargetVersion.PY38: 8>}, line_length=120, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -1,5 +1,17 @@
if any(
k in t
- for k in ["AAAAAAAAAA", "AAAAA", "AAAAAA", "AAAAAAAAA", "AAA", "AAAAAA", "AAAAAAAA", "AAA", "AAAAA", "AAAAA", "AAAA"]
+ for k in [
+ "AAAAAAAAAA",
+ "AAAAA",
+ "AAAAAA",
+ "AAAAAAAAA",
+ "AAA",
+ "AAAAAA",
+ "AAAAAAAA",
+ "AAA",
+ "AAAAA",
+ "AAAAA",
+ "AAAA",
+ ]
) and not any(k in t for k in ["AAA"]):
pass
--- first pass
+++ second pass
@@ -1,17 +1,20 @@
-if any(
- k in t
- for k in [
- "AAAAAAAAAA",
- "AAAAA",
- "AAAAAA",
- "AAAAAAAAA",
- "AAA",
- "AAAAAA",
- "AAAAAAAA",
- "AAA",
- "AAAAA",
- "AAAAA",
- "AAAA",
- ]
-) and not any(k in t for k in ["AAA"]):
+if (
+ any(
+ k in t
+ for k in [
+ "AAAAAAAAAA",
+ "AAAAA",
+ "AAAAAA",
+ "AAAAAAAAA",
+ "AAA",
+ "AAAAAA",
+ "AAAAAAAA",
+ "AAA",
+ "AAAAA",
+ "AAAAA",
+ "AAAA",
+ ]
+ )
+ and not any(k in t for k in ["AAA"])
+):
pass
Original code:
Instability:
--- first pass
+++ second pass
@@ -1748,20 +1748,23 @@
False,
12,
timestamp,
["#fakeusers", "#fakemisc"],
)
- expected = "\r\n".join(
- [
- ":%(hostname)s 311 %(req)s %(targ)s target host.com * :Target User",
- ":%(hostname)s 312 %(req)s %(targ)s irc.host.com :A fake server",
- ":%(hostname)s 317 %(req)s %(targ)s 12 %(timestamp)s :seconds idle, signon time",
- ":%(hostname)s 319 %(req)s %(targ)s :#fakeusers #fakemisc",
- ":%(hostname)s 318 %(req)s %(targ)s :End of WHOIS list.",
- "",
- ]
- ) % dict(hostname=hostname, timestamp=timestamp, req=req, targ=targ)
+ expected = (
+ "\r\n".join(
+ [
+ ":%(hostname)s 311 %(req)s %(targ)s target host.com * :Target User",
+ ":%(hostname)s 312 %(req)s %(targ)s irc.host.com :A fake server",
+ ":%(hostname)s 317 %(req)s %(targ)s 12 %(timestamp)s :seconds idle, signon time",
+ ":%(hostname)s 319 %(req)s %(targ)s :#fakeusers #fakemisc",
+ ":%(hostname)s 318 %(req)s %(targ)s :End of WHOIS list.",
+ "",
+ ]
+ )
+ % dict(hostname=hostname, timestamp=timestamp, req=req, targ=targ)
+ )
self.check(expected)
I have another case if you like: blk_8l64bulm.log
The workaround using `--fast` worked.
Another example...
Source:
aaaaaaaaaaaaaaaaaaaaaaaaaa = bbbbbbbbbbbbbbbbbbbb( # ccccccccccccccccccccccccccccccccccc
d=0
)
Log:
Mode(target_versions={<TargetVersion.PY37: 7>}, line_length=88, string_normalization=False, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -1,3 +1,3 @@
-aaaaaaaaaaaaaaaaaaaaaaaaaa = bbbbbbbbbbbbbbbbbbbb( # ccccccccccccccccccccccccccccccccccc
- d=0
+aaaaaaaaaaaaaaaaaaaaaaaaaa = (
+ bbbbbbbbbbbbbbbbbbbb(d=0) # ccccccccccccccccccccccccccccccccccc
)
--- first pass
+++ second pass
@@ -1,3 +1,3 @@
-aaaaaaaaaaaaaaaaaaaaaaaaaa = (
- bbbbbbbbbbbbbbbbbbbb(d=0) # ccccccccccccccccccccccccccccccccccc
-)
+aaaaaaaaaaaaaaaaaaaaaaaaaa = bbbbbbbbbbbbbbbbbbbb(
+ d=0
+) # ccccccccccccccccccccccccccccccccccc
Note: the error occurs whether `a`, `b`, and `c` are longer or shorter, so long as the line is >88 characters in total.
Got this error today.
Version: black, version 20.8b1
Mode(target_versions={<TargetVersion.PY38: 8>}, line_length=120, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -7,34 +7,34 @@
def test_serialize_to_json():
"""Test if a dictionary can be serialized to json string correctly."""
uuid_value = uuid.uuid4()
assert (
- serialize_to_json(
- {
- "string": "hello world",
- "decimal": Decimal(3.14159),
- "list": [1, 2, 3, 3.14159, Decimal(3.14159)],
- "uuid": uuid_value,
- "date": datetime(2020, 1, 1, 1, 1, 1),
- "nested_dict": {"boolean": True, "float": "3.14159", "tuple": (1, 2, 3)},
- }
- )
- == '{"string":"hello '
- 'world",'
- '"decimal":3.14158999999999988261834005243144929409027099609375,'
- '"list":[1,2,3,3.14159,3.14158999999999988261834005243144929409027099609375],'
- '"uuid":"' + str(uuid_value) + '","date":"2020-01-01T01:01:01+00:00",'
- '"nested_dict":{"boolean":true,"float":"3.14159","tuple":[1,2,3]}}'
+ serialize_to_json(
+ {
+ "string": "hello world",
+ "decimal": Decimal(3.14159),
+ "list": [1, 2, 3, 3.14159, Decimal(3.14159)],
+ "uuid": uuid_value,
+ "date": datetime(2020, 1, 1, 1, 1, 1),
+ "nested_dict": {"boolean": True, "float": "3.14159", "tuple": (1, 2, 3)},
+ }
+ )
+ == '{"string":"hello '
+ 'world",'
+ '"decimal":3.14158999999999988261834005243144929409027099609375,'
+ '"list":[1,2,3,3.14159,3.14158999999999988261834005243144929409027099609375],'
+ '"uuid":"' + str(uuid_value) + '","date":"2020-01-01T01:01:01+00:00",'
+ '"nested_dict":{"boolean":true,"float":"3.14159","tuple":[1,2,3]}}'
)
def test_deserialize_from_json():
"""Test if a json string can be deserialized to a dictionary correctly."""
assert (
- deserialize_from_json(
- """
+ deserialize_from_json(
+ """
{
"string": "hello world",
"decimal": 3.14158999999999988261834005243144929409027099609375,
"list": [
1,
@@ -54,18 +54,19 @@
3
]
}
}
"""
- ) == {
- 'date': datetime(2020, 1, 1, 1, 1, 1, tzinfo=timezone.utc),
- 'decimal': 3.14159,
- 'list': [1, 2, 3, 3.14159, 3.14159],
- 'nested_dict': {'boolean': True, 'float': '3.14159', 'tuple': [1, 2, 3]},
- 'string': 'hello world',
- 'uuid': '424984fc-485a-4063-8625-aea10a899ff5'
- }
+ )
+ == {
+ "date": datetime(2020, 1, 1, 1, 1, 1, tzinfo=timezone.utc),
+ "decimal": 3.14159,
+ "list": [1, 2, 3, 3.14159, 3.14159],
+ "nested_dict": {"boolean": True, "float": "3.14159", "tuple": [1, 2, 3]},
+ "string": "hello world",
+ "uuid": "424984fc-485a-4063-8625-aea10a899ff5",
+ }
)
def test_serialize_to_dict():
"""Test if a dictionary can be normalized to a serializable dictionary."""
--- first pass
+++ second pass
@@ -28,13 +28,12 @@
)
def test_deserialize_from_json():
"""Test if a json string can be deserialized to a dictionary correctly."""
- assert (
- deserialize_from_json(
- """
+ assert deserialize_from_json(
+ """
{
"string": "hello world",
"decimal": 3.14158999999999988261834005243144929409027099609375,
"list": [
1,
@@ -54,19 +53,17 @@
3
]
}
}
"""
- )
- == {
- "date": datetime(2020, 1, 1, 1, 1, 1, tzinfo=timezone.utc),
- "decimal": 3.14159,
- "list": [1, 2, 3, 3.14159, 3.14159],
- "nested_dict": {"boolean": True, "float": "3.14159", "tuple": [1, 2, 3]},
- "string": "hello world",
- "uuid": "424984fc-485a-4063-8625-aea10a899ff5",
- }
- )
+ ) == {
+ "date": datetime(2020, 1, 1, 1, 1, 1, tzinfo=timezone.utc),
+ "decimal": 3.14159,
+ "list": [1, 2, 3, 3.14159, 3.14159],
+ "nested_dict": {"boolean": True, "float": "3.14159", "tuple": [1, 2, 3]},
+ "string": "hello world",
+ "uuid": "424984fc-485a-4063-8625-aea10a899ff5",
+ }
def test_serialize_to_dict():
"""Test if a dictionary can be normalized to a serializable dictionary."""
def f():
return a(
b(
c(n)
#
),
[]
) + d(x, y)
gives this:
Mode(target_versions={<TargetVersion.PY36: 6>, <TargetVersion.PY38: 8>, <TargetVersion.PY37: 7>}, line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -1,9 +1,8 @@
def f():
return a(
b(
- c(n)
- #
- ),
- []
- ) + d(x, y)
-
+ c(n)
+ #
+ ),
+ [],
+ ) + d(x, y)
--- first pass
+++ second pass
@@ -1,8 +1,11 @@
def f():
- return a(
- b(
- c(n)
- #
- ),
- [],
- ) + d(x, y)
+ return (
+ a(
+ b(
+ c(n)
+ #
+ ),
+ [],
+ )
+ + d(x, y)
+ )
This happens with Black 20.8b1, but not 19.3b0. The first bad commit is 586d24236e6b57bc3b5da85fdbe2563835021076.
We have the same issue in Django with version 20.8b1: blk_j65piel1.log
Version 20.8b1. Log: blk_g8be276f.log
I don't have the diffs, but here are three segments that broke black. Version 20.8b1
import numpy as np
from metpy.testing import assert_almost_equal
from metpy.calc import height_to_geopotential
from metpy.units import units
def test_height_to_geopotential_32bit():
"""Test conversion to geopotential with 32-bit values."""
heights = np.linspace(20597, 20598, 11, dtype=np.float32) * units.m
truth = np.array([201336.67, 201337.66, 201338.62, 201339.61, 201340.58, 201341.56,
201342.53, 201343.52, 201344.48, 201345.44, 201346.42],
dtype=np.float32) * units('J/kg')
assert_almost_equal(height_to_geopotential(heights), truth, 2)
import numpy as np
import pytest
from metpy.testing import assert_array_almost_equal
from metpy.calc import get_layer
@pytest.mark.parametrize('flip_order', [(True, False)])
def test_get_layer_float32(flip_order):
"""Test that get_layer works properly with float32 data."""
p = np.asarray([940.85083008, 923.78851318, 911.42022705, 896.07220459,
876.89404297, 781.63330078], np.float32) * units('hPa')
hgt = np.asarray([563.671875, 700.93817139, 806.88098145, 938.51745605,
1105.25854492, 2075.04443359], dtype=np.float32) * units.meter
true_p_layer = np.asarray([940.85083008, 923.78851318, 911.42022705, 896.07220459,
876.89404297, 831.86472819], np.float32) * units('hPa')
true_hgt_layer = np.asarray([563.671875, 700.93817139, 806.88098145, 938.51745605,
1105.25854492, 1549.8079], dtype=np.float32) * units.meter
if flip_order:
p = p[::-1]
hgt = hgt[::-1]
p_layer, hgt_layer = get_layer(p, hgt, height=hgt, depth=1000. * units.meter)
assert_array_almost_equal(p_layer, true_p_layer, 4)
assert_array_almost_equal(hgt_layer, true_hgt_layer, 4)
from metpy.xarray import grid_deltas_from_dataarray
from metpy.calc import first_derivative
import numpy as np
from metpy.testing import assert_array_almost_equal
from metpy.units import units
def test_first_derivative_xarray_pint_conversion(test_da_lonlat):
"""Test first derivative with implicit xarray to pint quantity conversion."""
dx, _ = grid_deltas_from_dataarray(test_da_lonlat)
deriv = first_derivative(test_da_lonlat, delta=dx, axis=-1)
truth = np.array([[[-3.30782978e-06] * 4, [-3.42816074e-06] * 4, [-3.57012948e-06] * 4,
[-3.73759364e-06] * 4]] * 3) * units('kelvin / meter')
assert_array_almost_equal(deriv, truth, 12)
Here is a minimal example to reproduce the issue, with black version 20.8b1 (works with version 19.10b0)
def test_foo():
assert foo(
"foo",
) == [{"raw": {"person": "1"}, "error": "Invalid field unknown", "status": "error"}]
If I remove one character from the last line, the issue disappears.
Here's another example + log, run against black, version 20.8b2.dev31+gdd2f86a (master at the time of writing).
my_string = '______________ %s _________________ %s _____________________________________________' % (
'____________', '________________', '___________________________________________')
I just encountered the following on a unit test of mine (version 20.8b1). The assert statement in combination with `sorted` is modified twice.
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -13,9 +13,12 @@
([(1, 2, 3), ("a", "b", "c")], [("a", "b", "c")], []),
([("a",)], [(1,), ("a",)], [(1,)]),
([], [(1,), ("a", "b")], [(1,), ("a", "b")]),
],
)
-def test_compare_row_lists(items_to_remove, target_to_remove_items_from, expected_results):
- assert sorted(compare_row_lists(basis=items_to_remove,
- to_reduce=target_to_remove_items_from), key=str) == sorted(expected_results,
- key=str)
+def test_compare_row_lists(
+ items_to_remove, target_to_remove_items_from, expected_results
+):
+ assert sorted(
+ compare_row_lists(basis=items_to_remove, to_reduce=target_to_remove_items_from),
+ key=str,
+ ) == sorted(expected_results, key=str)
--- first pass
+++ second pass
@@ -16,9 +16,14 @@
],
)
def test_compare_row_lists(
items_to_remove, target_to_remove_items_from, expected_results
):
- assert sorted(
- compare_row_lists(basis=items_to_remove, to_reduce=target_to_remove_items_from),
- key=str,
- ) == sorted(expected_results, key=str)
+ assert (
+ sorted(
+ compare_row_lists(
+ basis=items_to_remove, to_reduce=target_to_remove_items_from
+ ),
+ key=str,
+ )
+ == sorted(expected_results, key=str)
    + )
FYI, running `--fast` twice is not sufficient for many of the examples here, because a `--diff` will still create a different result. However, reducing the maximum line length by one in the first pass seems to always work.
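The failure these logs demonstrate is a loss of idempotence: formatting the formatter's own output changes it again. The check is easy to state in code (a sketch with a toy `format_code` stand-in, since the real formatter here is Black itself):

```python
def is_idempotent(format_code, src: str) -> bool:
    """True if a second formatting pass is a no-op on the first pass's output."""
    first = format_code(src)
    second = format_code(first)
    return first == second

# A well-behaved toy formatter (strip trailing whitespace) is idempotent:
strip = lambda s: "\n".join(line.rstrip() for line in s.splitlines())
assert is_idempotent(strip, "x = 1   \ny = 2\n")
```

Running `--fast` twice only helps if the formatter reaches a fixed point by the second pass; the cases in this thread oscillate between two forms, which is why the `--diff` afterwards still reports changes.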
```python
if True:
    if (
        mission.get("status")
        not in [
            "new",
            "test",
            "scheduled"
        ]
        and not mission.get("redeem")
    ):
        False

if get_flag('WITH_PYMALLOC', lambda: impl == 'cp',
            warn=(impl == 'cp' and sys.version_info < (3, 8))) \
        and sys.version_info < (3, 8):
    pass

if get_flag('Py_UNICODE_SIZE', lambda: sys.maxunicode == 0x10ffff, expected=4,
            warn=(impl == 'cp' and sys.version_info < (3, 3))) \
        and sys.version_info < (3, 3):
    pass

should_list_installed = (
    subcommand_name in ['show', 'uninstall'] and
    not current.startswith('-')
)

assert (
    xxxxxx(
        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,
        xxxxxxxxxxxxxxxxxxxxxxxxx,
    )
    == xxxxxx(xxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxx)
)

if any(
    k in t
    for k in ["AAAAAAAAAA", "AAAAA", "AAAAAA", "AAAAAAAAA", "AAA", "AAAAAA", "AAAAAAAA", "AAA", "AAAAA", "AAAAA", "AAAA"]
) and not any(k in t for k in ["AAA"]):
    pass

aaaaaaaaaaaaaaaaaaaaaaaaaa = bbbbbbbbbbbbbbbbbbbb( # ccccccccccccccccccccccccccccccccccc
    d=0
)

def f():
    return a(
        b(
            c(n)
            #
        ),
        []
    ) + d(x, y)

my_string = '______________ %s _________________ %s _____________________________________________' % (
    '____________', '________________', '___________________________________________')
```
Black version:
$ black --version
black, version 20.8b1
This will fail:
$ black --fast blacktest.py
$ black --fast blacktest.py
$ black --diff blacktest.py
```diff
--- blacktest.py	2020-10-19 18:27:45.116011 +0000
+++ blacktest.py	2020-10-19 18:28:33.654679 +0000
@@ -2,74 +2,87 @@
 if mission.get("status") not in ["new", "test", "scheduled"] and not mission.get(
     "redeem"
 ):
     False
-if get_flag(
-    "WITH_PYMALLOC",
-    lambda: impl == "cp",
-    warn=(impl == "cp" and sys.version_info < (3, 8)),
-) and sys.version_info < (3, 8):
+if (
+    get_flag(
+        "WITH_PYMALLOC",
+        lambda: impl == "cp",
+        warn=(impl == "cp" and sys.version_info < (3, 8)),
+    )
+    and sys.version_info < (3, 8)
+):
     pass
-if get_flag(
-    "Py_UNICODE_SIZE",
-    lambda: sys.maxunicode == 0x10FFFF,
-    expected=4,
-    warn=(impl == "cp" and sys.version_info < (3, 3)),
-) and sys.version_info < (3, 3):
+if (
+    get_flag(
+        "Py_UNICODE_SIZE",
+        lambda: sys.maxunicode == 0x10FFFF,
+        expected=4,
+        warn=(impl == "cp" and sys.version_info < (3, 3)),
+    )
+    and sys.version_info < (3, 3)
+):
     pass
-should_list_installed = subcommand_name in [
-    "show",
-    "uninstall",
-] and not current.startswith("-")
+should_list_installed = (
+    subcommand_name
+    in [
+        "show",
+        "uninstall",
+    ]
+    and not current.startswith("-")
+)
 assert (
     xxxxxx(
         xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,
         xxxxxxxxxxxxxxxxxxxxxxxxx,
     )
     == xxxxxx(xxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxx)
 )
-if any(
-    k in t
-    for k in [
-        "AAAAAAAAAA",
-        "AAAAA",
-        "AAAAAA",
-        "AAAAAAAAA",
-        "AAA",
-        "AAAAAA",
-        "AAAAAAAA",
-        "AAA",
-        "AAAAA",
-        "AAAAA",
-        "AAAA",
-    ]
-) and not any(k in t for k in ["AAA"]):
+if (
+    any(
+        k in t
+        for k in [
+            "AAAAAAAAAA",
+            "AAAAA",
+            "AAAAAA",
+            "AAAAAAAAA",
+            "AAA",
+            "AAAAAA",
+            "AAAAAAAA",
+            "AAA",
+            "AAAAA",
+            "AAAAA",
+            "AAAA",
+        ]
+    )
+    and not any(k in t for k in ["AAA"])
+):
     pass
-aaaaaaaaaaaaaaaaaaaaaaaaaa = (
-    bbbbbbbbbbbbbbbbbbbb(d=0)  # ccccccccccccccccccccccccccccccccccc
-)
+aaaaaaaaaaaaaaaaaaaaaaaaaa = bbbbbbbbbbbbbbbbbbbb(
+    d=0
+)  # ccccccccccccccccccccccccccccccccccc
 def f():
-    return a(
-        b(
-            c(n)
-            #
-        ),
-        [],
-    ) + d(x, y)
+    return (
+        a(
+            b(
+                c(n)
+                #
+            ),
+            [],
+        )
+        + d(x, y)
+    )
-my_string = (
-    "______________ %s _________________ %s _____________________________________________"
-    % (
-        "____________",
-        "________________",
-        "___________________________________________",
-    )
+my_string = "______________ %s _________________ %s _____________________________________________" % (
+    "____________",
+    "________________",
+    "___________________________________________",
 )
```
However, this will work fine:
$ black --fast -l 87 blacktest.py
reformatted blacktest.py
All done! ✨ 🍰 ✨
1 file reformatted.
$ black --fast blacktest.py
reformatted blacktest.py
All done! ✨ 🍰 ✨
1 file reformatted.
$ black --diff blacktest.py
All done! ✨ 🍰 ✨
1 file would be left unchanged.
So for a proper workaround, run the first `--fast` pass of black with `-l 87`!
Running Black 20.8b1
Log:
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -57,12 +57,12 @@
base_path: If left blank, this will default to the path to the file that called this function. When frozen as EXE, this will always be the path to the Pyinstaller directory.
For use specifically with files compiled to windows EXE by pyinstaller.
"""
if base_path is None:
os.path.dirname((inspect.stack()[1][1]))  # pylint: disable=invalid-name
)
base_path = path_to_file_that_called_this_function
if base_path == "":
raise BlankAbsoluteResourcePathError()
if is_frozen_as_exe():
--- first pass
+++ second pass
@@ -57,13 +57,13 @@
base_path: If left blank, this will default to the path to the file that called this function. When frozen as EXE, this will always be the path to the Pyinstaller directory.
For use specifically with files compiled to windows EXE by pyinstaller.
"""
if base_path is None:
Running black 20.8b1. Passing the `--fast` flag solves this particular case.
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -11,10 +11,11 @@
from metpy.units import units
from metpy.testing import assert_almost_equal, assert_array_almost_equal
from hums.constants import AREA_ROOF, AREA_SHED, AREA_TANK
from hums.water import HydroVu, tank_volume, forecast_depth, forecast_volume
+
class TestHydroVu:
@pytest.fixture(scope="session")
def hydrovu(self):
"""Instantiate a HydroVu object to use in tests."""
@@ -155,26 +156,42 @@
test_file.unlink()
def test_tank_volume():
"""Test the tank_volume function"""
- depth = np.array([1.1, 1.0, 1.1, 0.9, 1.0, 1.1, 1.0]) * units('meter')
- truth = np.array([5039.41735, 4581.2885, 5039.41735, 4123.15965, 4581.2885, 5039.41735, 4581.2885]) * units('gallon')
+ depth = np.array([1.1, 1.0, 1.1, 0.9, 1.0, 1.1, 1.0]) * units("meter")
+ truth = np.array(
+ [
+ 5039.41735,
+ 4581.2885,
+ 5039.41735,
+ 4123.15965,
+ 4581.2885,
+ 5039.41735,
+ 4581.2885,
+ ]
+ ) * units("gallon")
actual = tank_volume(depth)
assert_array_almost_equal(actual, truth, decimal=6)
+
def test_forecast_depth():
"""Test the forecast_depth function"""
- precip_accum = np.array([0.05, 0.07, 0.07, 0.12, 0.34]) * units('inches')
- truth = np.array([0.0286736, 0.04014298, 0.04014298, 0.0688165, 0.1949802]) * units('feet')
+ precip_accum = np.array([0.05, 0.07, 0.07, 0.12, 0.34]) * units("inches")
+ truth = np.array([0.0286736, 0.04014298, 0.04014298, 0.0688165, 0.1949802]) * units(
+ "feet"
+ )
actual = forecast_depth(precip_accum)
assert_array_almost_equal(actual, truth)
+
def test_forecast_volume():
"""Test the forecast_volume function"""
- precip_accum = np.array([0.05, 0.07, 0.07, 0.12, 0.34]) * units('inches')
- truth = np.array([40.0392738, 56.0548996, 56.0548996, 96.0940617, 272.266671]) * units('gallon')
+ precip_accum = np.array([0.05, 0.07, 0.07, 0.12, 0.34]) * units("inches")
+ truth = np.array(
+ [40.0392738, 56.0548996, 56.0548996, 96.0940617, 272.266671]
+ ) * units("gallon")
actual = forecast_volume(precip_accum)
assert_array_almost_equal(actual, truth, decimal=3)
--- first pass
+++ second pass
@@ -157,21 +157,24 @@
def test_tank_volume():
"""Test the tank_volume function"""
depth = np.array([1.1, 1.0, 1.1, 0.9, 1.0, 1.1, 1.0]) * units("meter")
- truth = np.array(
- [
- 5039.41735,
- 4581.2885,
- 5039.41735,
- 4123.15965,
- 4581.2885,
- 5039.41735,
- 4581.2885,
- ]
- ) * units("gallon")
+ truth = (
+ np.array(
+ [
+ 5039.41735,
+ 4581.2885,
+ 5039.41735,
+ 4123.15965,
+ 4581.2885,
+ 5039.41735,
+ 4581.2885,
+ ]
+ )
+ * units("gallon")
+ )
actual = tank_volume(depth)
assert_array_almost_equal(actual, truth, decimal=6)
Version 20.8b1
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -3,12 +3,14 @@
class TestConvertPBX:
def test_process_single_row(self):
convert_pbx_command = Command()
- assert convert_pbx_command.process_single_row(3, {"SETUP": "2012-03-08 blabla",
- "DESTINATION": "+666",
- "OPERATOR": '<name>.pbx.dummy.com',
- "DURATION (Seconds)": 456.7}) == ("2012-03-08",
- "666", "<name>",
- 456.7, 3)
-
+ assert convert_pbx_command.process_single_row(
+ 3,
+ {
+ "SETUP": "2012-03-08 blabla",
+ "DESTINATION": "+666",
+ "OPERATOR": "<name>.pbx.dummy.com",
+ "DURATION (Seconds)": 456.7,
+ },
+ ) == ("2012-03-08", "666", "<name>", 456.7, 3)
--- first pass
+++ second pass
@@ -3,14 +3,17 @@
class TestConvertPBX:
def test_process_single_row(self):
convert_pbx_command = Command()
- assert convert_pbx_command.process_single_row(
- 3,
- {
- "SETUP": "2012-03-08 blabla",
- "DESTINATION": "+666",
- "OPERATOR": "<name>.pbx.dummy.com",
- "DURATION (Seconds)": 456.7,
- },
- ) == ("2012-03-08", "666", "<name>", 456.7, 3)
+ assert (
+ convert_pbx_command.process_single_row(
+ 3,
+ {
+ "SETUP": "2012-03-08 blabla",
+ "DESTINATION": "+666",
+ "OPERATOR": "<name>.pbx.dummy.com",
+ "DURATION (Seconds)": 456.7,
+ },
+ )
+ == ("2012-03-08", "666", "<name>", 456.7, 3)
+ )
Mode(target_versions={<TargetVersion.PY36: 6>}, line_length=100, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
def itu_r_468_weighted_torch(spec, n_fft, sr):
assert spec.ndim == 3
assert spec.shape[-1] == n_fft // 2 + 1
- return spec * torch.tensor([
- itu_r_468_weighting.filter.r468(f, "1khz", "factor")
- for f in librosa.fft_frequencies(sr, n_fft)], device=spec.device)[None, None]
+ return (
+ spec
+ * torch.tensor(
+ [
+ itu_r_468_weighting.filter.r468(f, "1khz", "factor")
+ for f in librosa.fft_frequencies(sr, n_fft)
+ ],
+ device=spec.device,
+ )[None, None]
+ )
def rand_shelv(rand, sr, min_cutoff, max_cutoff, min_q, max_q, min_g, max_g, t, data):
f = rand.uniform(min_cutoff, max_cutoff)
q = rand.uniform(min_q, max_q)
--- first pass
+++ second pass
@@ -395,20 +395,17 @@
def itu_r_468_weighted_torch(spec, n_fft, sr):
assert spec.ndim == 3
assert spec.shape[-1] == n_fft // 2 + 1
- return (
- spec
- * torch.tensor(
- [
- itu_r_468_weighting.filter.r468(f, "1khz", "factor")
- for f in librosa.fft_frequencies(sr, n_fft)
- ],
- device=spec.device,
- )[None, None]
- )
+ return spec * torch.tensor(
+ [
+ itu_r_468_weighting.filter.r468(f, "1khz", "factor")
+ for f in librosa.fft_frequencies(sr, n_fft)
+ ],
+ device=spec.device,
+ )[None, None]
def rand_shelv(rand, sr, min_cutoff, max_cutoff, min_q, max_q, min_g, max_g, t, data):
f = rand.uniform(min_cutoff, max_cutoff)
q = rand.uniform(min_q, max_q)
Encountered this today running black, version 20.8b1 (first-time setup) on Mac:
error: cannot format reponame/filename.py: INTERNAL ERROR: Black produced different code on the second pass of the formatter. Please report a bug on https://github.com/psf/black/issues. This diff might be helpful: /var/folders/6t/my1ympps5vl3k1538gk9n_1h0000gp/T/blk__xaj_7vv.log
Oh no! 💥 💔 💥
1 file failed to reformat.
Another example, if it helps. Version 20.8b1. The line causing this issue for me:
if output_format in ("Integer", "Decimal", "Keep numeric", "Time") or output_format.startswith("Date ("):
The attached log indicates that the first pass produces
if output_format in (
"Integer",
"Decimal",
"Keep numeric",
"Time",
) or output_format.startswith("Date ("):
but that the second pass produces
if (
output_format
in (
"Integer",
"Decimal",
"Keep numeric",
"Time",
)
or output_format.startswith("Date (")
):
When manually changing the code to the value generated by the first pass, black doesn't complain anymore and indeed generates the code from the second pass.
Mode(target_versions=set(), line_length=99, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -117,14 +117,16 @@
emb["fields"].append(field)
# Cooldown handling
if cd := command._buckets._cooldown:
per = _("per")
- cooldown_text = (
- _("Can be used {} time(s) per {}").format(cd.rate, hum if (hum := chat.humanize_timedelta(seconds=cd.per)) else f"{int(cd.per)*1000}")
- + _(', per ' + cd.type.name if cd.type.name != 'default' else ' globally')
- )
+ cooldown_text = _("Can be used {} time(s) per {}").format(
+ cd.rate,
+ hum
+ if (hum := chat.humanize_timedelta(seconds=cd.per))
+ else f"{int(cd.per)*1000}",
+ ) + _(", per " + cd.type.name if cd.type.name != "default" else " globally")
field_cooldown = commands.help.EmbedField(
_("**__Cooldown:__**"), cooldown_text, False
)
emb["fields"].append(field_cooldown)
--- first pass
+++ second pass
@@ -117,16 +117,19 @@
emb["fields"].append(field)
# Cooldown handling
if cd := command._buckets._cooldown:
per = _("per")
- cooldown_text = _("Can be used {} time(s) per {}").format(
- cd.rate,
- hum
- if (hum := chat.humanize_timedelta(seconds=cd.per))
- else f"{int(cd.per)*1000}",
- ) + _(", per " + cd.type.name if cd.type.name != "default" else " globally")
+ cooldown_text = (
+ _("Can be used {} time(s) per {}").format(
+ cd.rate,
+ hum
+ if (hum := chat.humanize_timedelta(seconds=cd.per))
+ else f"{int(cd.per)*1000}",
+ )
+ + _(", per " + cd.type.name if cd.type.name != "default" else " globally")
+ )
field_cooldown = commands.help.EmbedField(
_("**__Cooldown:__**"), cooldown_text, False
)
emb["fields"].append(field_cooldown)
Black 20.8b1
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -61,11 +61,11 @@
num = len(timeslots)
message = ngettext(
"%(num)s cart delivery object generated.",
"%(num)s cart delivery objects generated.",
- num
+ num,
) % {"num": num}
self.message_user(request, message)
--- first pass
+++ second pass
@@ -58,15 +58,18 @@
def generate(self, request):
"""Generate cart deliveries for existing specific timeslots."""
timeslots = CartDelivery.objects.generate()
num = len(timeslots)
- message = ngettext(
- "%(num)s cart delivery object generated.",
- "%(num)s cart delivery objects generated.",
- num,
- ) % {"num": num}
+ message = (
+ ngettext(
+ "%(num)s cart delivery object generated.",
+ "%(num)s cart delivery objects generated.",
+ num,
+ )
+ % {"num": num}
+ )
self.message_user(request, message)
;-;
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -1,7 +1,12 @@
from __future__ import absolute_import
-from lark.exceptions import UnexpectedCharacters, UnexpectedInput, UnexpectedToken, ConfigurationError
+from lark.exceptions import (
+ UnexpectedCharacters,
+ UnexpectedInput,
+ UnexpectedToken,
+ ConfigurationError,
+)
import sys, os, pickle, hashlib
from io import open
import tempfile
from warnings import warn
@@ -15,22 +20,22 @@
from .parse_tree_builder import ParseTreeBuilder
from .parser_frontends import get_frontend, _get_lexer_callbacks
from .grammar import Rule
import re
+
try:
import regex
except ImportError:
regex = None
###{standalone
class LarkOptions(Serialize):
- """Specifies the options for Lark
-
- """
+ """Specifies the options for Lark"""
+
OPTIONS_DOC = """
**=== General Options ===**
start
The start symbol. Either a string, or a list of strings for multiple possible starts (Default: "start")
@@ -102,67 +107,70 @@
**=== End Options ===**
"""
if __doc__:
__doc__ += OPTIONS_DOC
-
# Adding a new option needs to be done in multiple places:
# - In the dictionary below. This is the primary truth of which options `Lark.__init__` accepts
# - In the docstring above. It is used both for the docstring of `LarkOptions` and `Lark`, and in readthedocs
# - In `lark-stubs/lark.pyi`:
# - As attribute to `LarkOptions`
# - As parameter to `Lark.__init__`
# - Potentially in `_LOAD_ALLOWED_OPTIONS` below this class, when the option doesn't change how the grammar is loaded
# - Potentially in `lark.tools.__init__`, if it makes sense, and it can easily be passed as a cmd argument
_defaults = {
- 'debug': False,
- 'keep_all_tokens': False,
- 'tree_class': None,
- 'cache': False,
- 'postlex': None,
- 'parser': 'earley',
- 'lexer': 'auto',
- 'transformer': None,
- 'start': 'start',
- 'priority': 'auto',
- 'ambiguity': 'auto',
- 'regex': False,
- 'propagate_positions': False,
- 'lexer_callbacks': {},
- 'maybe_placeholders': False,
- 'edit_terminals': None,
- 'g_regex_flags': 0,
- 'use_bytes': False,
- 'import_paths': [],
- 'source_path': None,
+ "debug": False,
+ "keep_all_tokens": False,
+ "tree_class": None,
+ "cache": False,
+ "postlex": None,
+ "parser": "earley",
+ "lexer": "auto",
+ "transformer": None,
+ "start": "start",
+ "priority": "auto",
+ "ambiguity": "auto",
+ "regex": False,
+ "propagate_positions": False,
+ "lexer_callbacks": {},
+ "maybe_placeholders": False,
+ "edit_terminals": None,
+ "g_regex_flags": 0,
+ "use_bytes": False,
+ "import_paths": [],
+ "source_path": None,
}
def __init__(self, options_dict):
o = dict(options_dict)
options = {}
for name, default in self._defaults.items():
if name in o:
value = o.pop(name)
- if isinstance(default, bool) and name not in ('cache', 'use_bytes'):
+ if isinstance(default, bool) and name not in ("cache", "use_bytes"):
value = bool(value)
else:
value = default
options[name] = value
- if isinstance(options['start'], STRING_TYPE):
- options['start'] = [options['start']]
-
- self.__dict__['options'] = options
-
- if not self.parser in ('earley', 'lalr', 'cyk', None):
- raise ConfigurationError(f"{self.parser} must be one of {', '.join(('earley', 'lalr', 'cyk', None))}")
-
- if self.parser == 'earley' and self.transformer:
- raise ValueError('Cannot specify an embedded transformer when using the Earley algorithm.'
- 'Please use your transformer on the resulting parse tree, or use a different algorithm (i.e. LALR)')
+ if isinstance(options["start"], STRING_TYPE):
+ options["start"] = [options["start"]]
+
+ self.__dict__["options"] = options
+
+ if not self.parser in ("earley", "lalr", "cyk", None):
+ raise ConfigurationError(
+ f"{self.parser} must be one of {', '.join(('earley', 'lalr', 'cyk', None))}"
+ )
+
+ if self.parser == "earley" and self.transformer:
+ raise ValueError(
+ "Cannot specify an embedded transformer when using the Earley algorithm."
+ "Please use your transformer on the resulting parse tree, or use a different algorithm (i.e. LALR)"
+ )
if o:
raise ValueError("Unknown options: %s" % o.keys())
def __getattr__(self, name):
@@ -183,14 +191,23 @@
return cls(data)
# Options that can be passed to the Lark parser, even when it was loaded from cache/standalone.
# These option are only used outside of `load_grammar`.
-_LOAD_ALLOWED_OPTIONS = {'postlex', 'transformer', 'use_bytes', 'debug', 'g_regex_flags', 'regex', 'propagate_positions', 'tree_class'}
-
-_VALID_PRIORITY_OPTIONS = ('auto', 'normal', 'invert', None)
-_VALID_AMBIGUITY_OPTIONS = ('auto', 'resolve', 'explicit', 'forest')
+_LOAD_ALLOWED_OPTIONS = {
+ "postlex",
+ "transformer",
+ "use_bytes",
+ "debug",
+ "g_regex_flags",
+ "regex",
+ "propagate_positions",
+ "tree_class",
+}
+
+_VALID_PRIORITY_OPTIONS = ("auto", "normal", "invert", None)
+_VALID_AMBIGUITY_OPTIONS = ("auto", "resolve", "explicit", "forest")
class Lark(Serialize):
"""Main interface for the library.
@@ -202,29 +219,32 @@
Example:
>>> Lark(r'''start: "foo" ''')
Lark(...)
"""
+
def __init__(self, grammar, **options):
self.options = LarkOptions(options)
# Set regex or re module
use_regex = self.options.regex
if use_regex:
if regex:
re_module = regex
else:
- raise ImportError('`regex` module must be installed if calling `Lark(regex=True)`.')
+ raise ImportError(
+ "`regex` module must be installed if calling `Lark(regex=True)`."
+ )
else:
re_module = re
# Some, but not all file-like objects have a 'name' attribute
if self.options.source_path is None:
try:
self.source_path = grammar.name
except AttributeError:
- self.source_path = '<string>'
+ self.source_path = "<string>"
else:
self.source_path = self.options.source_path
# Drain file-like objects to get their contents
try:
@@ -237,88 +257,119 @@
assert isinstance(grammar, STRING_TYPE)
self.source_grammar = grammar
if self.options.use_bytes:
if not isascii(grammar):
raise ValueError("Grammar must be ascii only, when use_bytes=True")
- if sys.version_info[0] == 2 and self.options.use_bytes != 'force':
- raise NotImplementedError("`use_bytes=True` may have issues on python2."
- "Use `use_bytes='force'` to use it at your own risk.")
+ if sys.version_info[0] == 2 and self.options.use_bytes != "force":
+ raise NotImplementedError(
+ "`use_bytes=True` may have issues on python2."
+ "Use `use_bytes='force'` to use it at your own risk."
+ )
cache_fn = None
if self.options.cache:
- if self.options.parser != 'lalr':
+ if self.options.parser != "lalr":
raise NotImplementedError("cache only works with parser='lalr' for now")
if isinstance(self.options.cache, STRING_TYPE):
cache_fn = self.options.cache
else:
if self.options.cache is not True:
raise ValueError("cache argument must be bool or str")
- unhashable = ('transformer', 'postlex', 'lexer_callbacks', 'edit_terminals')
+ unhashable = (
+ "transformer",
+ "postlex",
+ "lexer_callbacks",
+ "edit_terminals",
+ )
from . import __version__
- options_str = ''.join(k+str(v) for k, v in options.items() if k not in unhashable)
+
+ options_str = "".join(
+ k + str(v) for k, v in options.items() if k not in unhashable
+ )
s = grammar + options_str + __version__
md5 = hashlib.md5(s.encode()).hexdigest()
- cache_fn = tempfile.gettempdir() + '/.lark_cache_%s.tmp' % md5
+ cache_fn = tempfile.gettempdir() + "/.lark_cache_%s.tmp" % md5
if FS.exists(cache_fn):
- logger.debug('Loading grammar from cache: %s', cache_fn)
+ logger.debug("Loading grammar from cache: %s", cache_fn)
# Remove options that aren't relevant for loading from cache
- for name in (set(options) - _LOAD_ALLOWED_OPTIONS):
+ for name in set(options) - _LOAD_ALLOWED_OPTIONS:
del options[name]
- with FS.open(cache_fn, 'rb') as f:
+ with FS.open(cache_fn, "rb") as f:
self._load(f, **options)
return
- if self.options.lexer == 'auto':
- if self.options.parser == 'lalr':
- self.options.lexer = 'contextual'
- elif self.options.parser == 'earley':
- self.options.lexer = 'dynamic'
- elif self.options.parser == 'cyk':
- self.options.lexer = 'standard'
+ if self.options.lexer == "auto":
+ if self.options.parser == "lalr":
+ self.options.lexer = "contextual"
+ elif self.options.parser == "earley":
+ self.options.lexer = "dynamic"
+ elif self.options.parser == "cyk":
+ self.options.lexer = "standard"
else:
assert False, self.options.parser
lexer = self.options.lexer
- assert lexer in ('standard', 'contextual', 'dynamic', 'dynamic_complete') or issubclass(lexer, Lexer)
-
- if self.options.ambiguity == 'auto':
- if self.options.parser == 'earley':
- self.options.ambiguity = 'resolve'
+ assert lexer in (
+ "standard",
+ "contextual",
+ "dynamic",
+ "dynamic_complete",
+ ) or issubclass(lexer, Lexer)
+
+ if self.options.ambiguity == "auto":
+ if self.options.parser == "earley":
+ self.options.ambiguity = "resolve"
else:
- disambig_parsers = ['earley', 'cyk']
+ disambig_parsers = ["earley", "cyk"]
assert self.options.parser in disambig_parsers, (
- 'Only %s supports disambiguation right now') % ', '.join(disambig_parsers)
-
- if self.options.priority == 'auto':
- self.options.priority = 'normal'
+ "Only %s supports disambiguation right now"
+ ) % ", ".join(disambig_parsers)
+
+ if self.options.priority == "auto":
+ self.options.priority = "normal"
if self.options.priority not in _VALID_PRIORITY_OPTIONS:
- raise ValueError("invalid priority option: %r. Must be one of %r" % (self.options.priority, _VALID_PRIORITY_OPTIONS))
- assert self.options.ambiguity not in ('resolve__antiscore_sum', ), 'resolve__antiscore_sum has been replaced with the option priority="invert"'
+ raise ValueError(
+ "invalid priority option: %r. Must be one of %r"
+ % (self.options.priority, _VALID_PRIORITY_OPTIONS)
+ )
+ assert self.options.ambiguity not in (
+ "resolve__antiscore_sum",
+ ), 'resolve__antiscore_sum has been replaced with the option priority="invert"'
if self.options.ambiguity not in _VALID_AMBIGUITY_OPTIONS:
- raise ValueError("invalid ambiguity option: %r. Must be one of %r" % (self.options.ambiguity, _VALID_AMBIGUITY_OPTIONS))
+ raise ValueError(
+ "invalid ambiguity option: %r. Must be one of %r"
+ % (self.options.ambiguity, _VALID_AMBIGUITY_OPTIONS)
+ )
# Parse the grammar file and compose the grammars
- self.grammar = load_grammar(grammar, self.source_path, self.options.import_paths, self.options.keep_all_tokens)
+ self.grammar = load_grammar(
+ grammar,
+ self.source_path,
+ self.options.import_paths,
+ self.options.keep_all_tokens,
+ )
if self.options.postlex is not None:
terminals_to_keep = set(self.options.postlex.always_accept)
else:
terminals_to_keep = set()
# Compile the EBNF grammar into BNF
- self.terminals, self.rules, self.ignore_tokens = self.grammar.compile(self.options.start, terminals_to_keep)
+ self.terminals, self.rules, self.ignore_tokens = self.grammar.compile(
+ self.options.start, terminals_to_keep
+ )
if self.options.edit_terminals:
for t in self.terminals:
self.options.edit_terminals(t)
self._terminals_dict = {t.name: t for t in self.terminals}
# If the user asked to invert the priorities, negate them all here.
# This replaces the old 'resolve__antiscore_sum' option.
- if self.options.priority == 'invert':
+ if self.options.priority == "invert":
for rule in self.rules:
if rule.options.priority is not None:
rule.options.priority = -rule.options.priority
# Else, if the user asked to disable priorities, strip them from the
# rules. This allows the Earley parsers to skip an extra forest walk
@@ -327,48 +378,60 @@
for rule in self.rules:
if rule.options.priority is not None:
rule.options.priority = None
# TODO Deprecate lexer_callbacks?
- lexer_callbacks = (_get_lexer_callbacks(self.options.transformer, self.terminals)
- if self.options.transformer
- else {})
+ lexer_callbacks = (
+ _get_lexer_callbacks(self.options.transformer, self.terminals)
+ if self.options.transformer
+ else {}
+ )
lexer_callbacks.update(self.options.lexer_callbacks)
- self.lexer_conf = LexerConf(self.terminals, re_module, self.ignore_tokens, self.options.postlex, lexer_callbacks, self.options.g_regex_flags, use_bytes=self.options.use_bytes)
+ self.lexer_conf = LexerConf(
+ self.terminals,
+ re_module,
+ self.ignore_tokens,
+ self.options.postlex,
+ lexer_callbacks,
+ self.options.g_regex_flags,
+ use_bytes=self.options.use_bytes,
+ )
if self.options.parser:
self.parser = self._build_parser()
elif lexer:
self.lexer = self._build_lexer()
if cache_fn:
- logger.debug('Saving grammar to cache: %s', cache_fn)
- with FS.open(cache_fn, 'wb') as f:
+ logger.debug("Saving grammar to cache: %s", cache_fn)
+ with FS.open(cache_fn, "wb") as f:
self.save(f)
if __doc__:
__doc__ += "\n\n" + LarkOptions.OPTIONS_DOC
- __serialize_fields__ = 'parser', 'rules', 'options'
+ __serialize_fields__ = "parser", "rules", "options"
def _build_lexer(self):
return TraditionalLexer(self.lexer_conf)
def _prepare_callbacks(self):
self.parser_class = get_frontend(self.options.parser, self.options.lexer)
self._callbacks = None
# we don't need these callbacks if we aren't building a tree
- if self.options.ambiguity != 'forest':
+ if self.options.ambiguity != "forest":
self._parse_tree_builder = ParseTreeBuilder(
- self.rules,
- self.options.tree_class or Tree,
- self.options.propagate_positions,
- self.options.parser != 'lalr' and self.options.ambiguity == 'explicit',
- self.options.maybe_placeholders
- )
- self._callbacks = self._parse_tree_builder.create_callback(self.options.transformer)
+ self.rules,
+ self.options.tree_class or Tree,
+ self.options.propagate_positions,
+ self.options.parser != "lalr" and self.options.ambiguity == "explicit",
+ self.options.maybe_placeholders,
+ )
+ self._callbacks = self._parse_tree_builder.create_callback(
+ self.options.transformer
+ )
def _build_parser(self):
self._prepare_callbacks()
parser_conf = ParserConf(self.rules, self._callbacks, self.options.start)
return self.parser_class(self.lexer_conf, parser_conf, options=self.options)
@@ -377,11 +440,11 @@
"""Saves the instance into the given file object
Useful for caching and multiprocessing.
"""
data, m = self.memo_serialize([TerminalDef, Rule])
- pickle.dump({'data': data, 'memo': m}, f, protocol=pickle.HIGHEST_PROTOCOL)
+ pickle.dump({"data": data, "memo": m}, f, protocol=pickle.HIGHEST_PROTOCOL)
@classmethod
def load(cls, f):
"""Loads an instance from the given file object
@@ -393,26 +456,31 @@
def _load(self, f, **kwargs):
if isinstance(f, dict):
d = f
else:
d = pickle.load(f)
- memo = d['memo']
- data = d['data']
+ memo = d["memo"]
+ data = d["data"]
assert memo
- memo = SerializeMemoizer.deserialize(memo, {'Rule': Rule, 'TerminalDef': TerminalDef}, {})
- options = dict(data['options'])
+ memo = SerializeMemoizer.deserialize(
+ memo, {"Rule": Rule, "TerminalDef": TerminalDef}, {}
+ )
+ options = dict(data["options"])
if (set(kwargs) - _LOAD_ALLOWED_OPTIONS) & set(LarkOptions._defaults):
- raise ValueError("Some options are not allowed when loading a Parser: {}"
- .format(set(kwargs) - _LOAD_ALLOWED_OPTIONS))
+ raise ValueError(
+ "Some options are not allowed when loading a Parser: {}".format(
+ set(kwargs) - _LOAD_ALLOWED_OPTIONS
+ )
+ )
options.update(kwargs)
self.options = LarkOptions.deserialize(options, memo)
- self.rules = [Rule.deserialize(r, memo) for r in data['rules']]
- self.source_path = '<deserialized>'
+ self.rules = [Rule.deserialize(r, memo) for r in data["rules"]]
+ self.source_path = "<deserialized>"
self._prepare_callbacks()
self.parser = self.parser_class.deserialize(
- data['parser'],
+ data["parser"],
memo,
self._callbacks,
self.options, # Not all, but multiple attributes are used
)
self.terminals = self.parser.lexer_conf.tokens
@@ -420,11 +488,11 @@
return self
@classmethod
def _load_from_dict(cls, data, memo, **kwargs):
inst = cls.__new__(cls)
- return inst._load({'data': data, 'memo': memo}, **kwargs)
+ return inst._load({"data": data, "memo": memo}, **kwargs)
@classmethod
def open(cls, grammar_filename, rel_to=None, **options):
"""Create an instance of Lark with the grammar given by its filename
@@ -437,11 +505,11 @@
"""
if rel_to:
basepath = os.path.dirname(rel_to)
grammar_filename = os.path.join(basepath, grammar_filename)
- with open(grammar_filename, encoding='utf8') as f:
+ with open(grammar_filename, encoding="utf8") as f:
return cls(f, **options)
@classmethod
def open_from_package(cls, package, grammar_path, search_paths=("",), **options):
"""Create an instance of Lark with the grammar loaded from within the package `package`.
@@ -453,22 +521,25 @@
Lark.open_from_package(__name__, "example.lark", ("grammars",), parser=...)
"""
package = FromPackageLoader(package, search_paths)
full_path, text = package(None, grammar_path)
- options.setdefault('source_path', full_path)
- options.setdefault('import_paths', [])
- options['import_paths'].append(package)
+ options.setdefault("source_path", full_path)
+ options.setdefault("import_paths", [])
+ options["import_paths"].append(package)
return cls(text, **options)
def __repr__(self):
- return 'Lark(open(%r), parser=%r, lexer=%r, ...)' % (self.source_path, self.options.parser, self.options.lexer)
-
+ return "Lark(open(%r), parser=%r, lexer=%r, ...)" % (
+ self.source_path,
+ self.options.parser,
+ self.options.lexer,
+ )
def lex(self, text):
"Only lex (and postlex) the text, without parsing it. Only relevant when lexer='standard'"
- if not hasattr(self, 'lexer'):
+ if not hasattr(self, "lexer"):
self.lexer = self._build_lexer()
stream = self.lexer.lex(text)
if self.options.postlex:
return self.options.postlex.process(stream)
return stream
@@ -507,34 +578,44 @@
raise e
if isinstance(e, UnexpectedCharacters):
# If user didn't change the character position, then we should
if p == s.line_ctr.char_pos:
- s.line_ctr.feed(s.text[p:p+1])
+ s.line_ctr.feed(s.text[p : p + 1])
try:
return e.puppet.resume_parse()
except UnexpectedToken as e2:
- if isinstance(e, UnexpectedToken) and e.token.type == e2.token.type == '$END' and e.puppet == e2.puppet:
+ if (
+ isinstance(e, UnexpectedToken)
+ and e.token.type == e2.token.type == "$END"
+ and e.puppet == e2.puppet
+ ):
# Prevent infinite loop
raise e2
e = e2
except UnexpectedCharacters as e2:
e = e2
@property
def source(self):
- warn("Lark.source attribute has been renamed to Lark.source_path", DeprecationWarning)
+ warn(
+ "Lark.source attribute has been renamed to Lark.source_path",
+ DeprecationWarning,
+ )
return self.source_path
@source.setter
def source(self, value):
self.source_path = value
@property
def grammar_source(self):
- warn("Lark.grammar_source attribute has been renamed to Lark.source_grammar", DeprecationWarning)
+ warn(
+ "Lark.grammar_source attribute has been renamed to Lark.source_grammar",
+ DeprecationWarning,
+ )
return self.source_grammar
@grammar_source.setter
def grammar_source(self, value):
self.source_grammar = value
--- first pass
+++ second pass
@@ -306,16 +306,20 @@
elif self.options.parser == "cyk":
self.options.lexer = "standard"
else:
assert False, self.options.parser
lexer = self.options.lexer
- assert lexer in (
- "standard",
- "contextual",
- "dynamic",
- "dynamic_complete",
- ) or issubclass(lexer, Lexer)
+ assert (
+ lexer
+ in (
+ "standard",
+ "contextual",
+ "dynamic",
+ "dynamic_complete",
+ )
+ or issubclass(lexer, Lexer)
+ )
if self.options.ambiguity == "auto":
if self.options.parser == "earley":
self.options.ambiguity = "resolve"
else:
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -59,13 +59,13 @@
# Make sure "f" works too
assert _pipe_command(
self.ls, R' python -m ret "(\w+)\..{4}" f -g 1'
) == "\n".join(("poetry", "pyproject"))
assert _pipe_command(
- self.ls, R' python -m ret "(?P<some_long_group_name>\w+)\..{4}" f -g some_long_group_name'
+ self.ls,
+ R' python -m ret "(?P<some_long_group_name>\w+)\..{4}" f -g some_long_group_name',
) == "\n".join(("poetry", "pyproject"))
-
def test_search(self):
assert _pipe_command(
self.ls, R'python -m ret "LICENSE" search'
) == _pipe_command(self.ls, _grep_on_all("LICENSE"))
--- first pass
+++ second pass
@@ -58,14 +58,17 @@
# Make sure "f" works too
assert _pipe_command(
self.ls, R' python -m ret "(\w+)\..{4}" f -g 1'
) == "\n".join(("poetry", "pyproject"))
- assert _pipe_command(
- self.ls,
- R' python -m ret "(?P<some_long_group_name>\w+)\..{4}" f -g some_long_group_name',
- ) == "\n".join(("poetry", "pyproject"))
+ assert (
+ _pipe_command(
+ self.ls,
+ R' python -m ret "(?P<some_long_group_name>\w+)\..{4}" f -g some_long_group_name',
+ )
+ == "\n".join(("poetry", "pyproject"))
+ )
def test_search(self):
assert _pipe_command(
self.ls, R'python -m ret "LICENSE" search'
) == _pipe_command(self.ls, _grep_on_all("LICENSE"))
Not a problem for our repo, since this only surfaced because I had missed a folder in our black-ignore, but if it helps with debugging, great. The diff is quite large at around 400 lines, so I put it in a gist. I also ran black with --verbose and pasted the output below.
diff https://gist.github.com/xylix/9b282167786bc7ba9447ab11db9a6c5b
stacktrace
(venv) ~/C/robotframework-browser (install-browsers-in-site-packages)> black -v /Users/kerkko/Code/robotframework-browser/Browser/wrapper/node_modules/playwright/.local-browsers/webkit-1383/JavaScriptCore.framework/Versions/A/PrivateHeaders/generate_objc_backend_dispatcher_implementation.py
Using configuration from /Users/kerkko/Code/robotframework-browser/Browser/pyproject.toml.
Traceback (most recent call last):
File "/Users/kerkko/Code/robotframework-browser/venv/lib/python3.9/site-packages/black/__init__.py", line 670, in reformat_one
if changed is not Changed.CACHED and format_file_in_place(
File "/Users/kerkko/Code/robotframework-browser/venv/lib/python3.9/site-packages/black/__init__.py", line 813, in format_file_in_place
dst_contents = format_file_contents(src_contents, fast=fast, mode=mode)
File "/Users/kerkko/Code/robotframework-browser/venv/lib/python3.9/site-packages/black/__init__.py", line 940, in format_file_contents
assert_stable(src_contents, dst_contents, mode=mode)
File "/Users/kerkko/Code/robotframework-browser/venv/lib/python3.9/site-packages/black/__init__.py", line 6170, in assert_stable
raise AssertionError(
AssertionError: INTERNAL ERROR: Black produced different code on the second pass of the formatter. Please report a bug on https://github.com/psf/black/issues. This diff might be helpful: /var/folders/xz/6y6sty192sbg2m3bb623z6hw0000gn/T/blk_d46pcptu.log
error: cannot format /Users/kerkko/Code/robotframework-browser/Browser/wrapper/node_modules/playwright/.local-browsers/webkit-1383/JavaScriptCore.framework/Versions/A/PrivateHeaders/generate_objc_backend_dispatcher_implementation.py: INTERNAL ERROR: Black produced different code on the second pass of the formatter. Please report a bug on https://github.com/psf/black/issues. This diff might be helpful: /var/folders/xz/6y6sty192sbg2m3bb623z6hw0000gn/T/blk_d46pcptu.log
Oh no! 💥 💔 💥
1 file failed to reformat
Mode(target_versions={<TargetVersion.PY36: 6>}, line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -38,27 +38,30 @@
TableReference = namedtuple(
"TableReference", ["schema", "name", "alias", "is_function"]
)
TableReference.ref = property(
lambda self: self.alias
- or (
- self.name
- if self.name.islower() or self.name[0] == '"'
- else '"' + self.name + '"'
- )
+ or (
+ self.name
+ if self.name.islower() or self.name[0] == '"'
+ else '"' + self.name + '"'
+ )
)
# This code is borrowed from sqlparse example script.
# <url>
def is_subselect(parsed):
if not parsed.is_group:
return False
for item in parsed.tokens:
- if (
- item.ttype is DML
- and item.value.upper() in ("SELECT", "INSERT", "UPDATE", "CREATE", "DELETE")
+ if item.ttype is DML and item.value.upper() in (
+ "SELECT",
+ "INSERT",
+ "UPDATE",
+ "CREATE",
+ "DELETE",
):
return True
return False
@@ -81,22 +84,27 @@
# StopIteration. So we need to ignore the keyword if the keyword
# FROM.
# Also 'SELECT * FROM abc JOIN def' will trigger this elif
# condition. So we need to ignore the keyword JOIN and its variants
# INNER JOIN, FULL OUTER JOIN, etc.
- elif item.ttype is Keyword and (not item.value.upper() == "FROM") and (
- not item.value.upper().endswith("JOIN")
+ elif (
+ item.ttype is Keyword
+ and (not item.value.upper() == "FROM")
+ and (not item.value.upper().endswith("JOIN"))
):
tbl_prefix_seen = False
else:
yield item
elif item.ttype is Keyword or item.ttype is Keyword.DML:
item_val = item.value.upper()
- if (
- item_val in ("COPY", "FROM", "INTO", "UPDATE", "TABLE")
- or item_val.endswith("JOIN")
- ):
+ if item_val in (
+ "COPY",
+ "FROM",
+ "INTO",
+ "UPDATE",
+ "TABLE",
+ ) or item_val.endswith("JOIN"):
tbl_prefix_seen = True
# 'SELECT a, FROM abc' will detect FROM as part of the column list.
# So this check here is necessary.
elif isinstance(item, IdentifierList):
for identifier in item.get_identifiers():
@@ -138,12 +146,12 @@
# Sometimes Keywords (such as FROM ) are classified as
# identifiers which don't have the get_real_name() method.
try:
schema_name = identifier.get_parent_name()
real_name = identifier.get_real_name()
- is_function = (
- allow_functions and _identifier_is_function(identifier)
+ is_function = allow_functions and _identifier_is_function(
+ identifier
)
except AttributeError:
continue
if real_name:
yield TableReference(
@@ -212,11 +220,11 @@
# print(id.get_real_name())
# columns.append(id)
# print(dir(id))
# print('name:%s, parent_name:%s real_name:%s' %
# (columns[0].get_name(), columns[0].get_parent_name(), columns[0].get_real_name()))
-if __name__ == '__main__':
+if __name__ == "__main__":
# parse()
sql = """
select * from devops.ApplyTest_testlist join devops.ApplyTest_testdeploy ATt on ApplyTest_testlist.id = ATt.testlist_id; """
res = extract_tables(sql=sql)
--- first pass
+++ second pass
@@ -94,17 +94,21 @@
tbl_prefix_seen = False
else:
yield item
elif item.ttype is Keyword or item.ttype is Keyword.DML:
item_val = item.value.upper()
- if item_val in (
- "COPY",
- "FROM",
- "INTO",
- "UPDATE",
- "TABLE",
- ) or item_val.endswith("JOIN"):
+ if (
+ item_val
+ in (
+ "COPY",
+ "FROM",
+ "INTO",
+ "UPDATE",
+ "TABLE",
+ )
+ or item_val.endswith("JOIN")
+ ):
tbl_prefix_seen = True
# 'SELECT a, FROM abc' will detect FROM as part of the column list.
# So this check here is necessary.
elif isinstance(item, IdentifierList):
for identifier in item.get_identifiers():
Unrelated parts removed; Black 20.8b1, Python 3.8.5. The workaround with --fast works.
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -34,17 +34,20 @@
- if donate := FinancialDonate.get(
- category=category_id,
- student=student_id,
- ) is not None:
+ if (
+ donate := FinancialDonate.get(
+ category=category_id,
+ student=student_id,
+ )
+ is not None
+ ):
with db_session:
- donate.summ = donate.summ + summ,
- donate.update_date = datetime.now(),
+ donate.summ = (donate.summ + summ,)
+ donate.update_date = (datetime.now(),)
return donate
with db_session:
return FinancialDonate(
category_id=category_id,
student_id=student_id,
@@ -116,11 +119,13 @@
-def get_or_create_finances_category(group_id: int, name: str, summ: int) -> FinancialCategory:
+def get_or_create_finances_category(
+ group_id: int, name: str, summ: int
+) -> FinancialCategory:
--- first pass
+++ second pass
@@ -40,11 +40,11 @@
donate := FinancialDonate.get(
category=category_id,
student=student_id,
)
is not None
- ):
+ ) :
with db_session:
donate.summ = (donate.summ + summ,)
donate.update_date = (datetime.now(),)
return donate
with db_session:
Black basically missed this: the second pass emits ") :" with a stray space before the colon. The style seems inconsistent... I prefer no whitespace before the colon, but eh, it's Black.
@ThatXliner, it's a bug, one that is already fixed in the development branch by commit https://github.com/psf/black/commit/1d2d7264ec7c448744b771910cc972da03b1cb80; it's now just waiting to be included in the next release.
Attaching a log I recently ran into; running black --fast <file> twice resolves the issue.
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -321,12 +321,12 @@
@property
def dirty(self):
now = datetime.now()
# type: ignore
return Recipe.dirty.fget(self) or self.age > max(
- now - datetime.fromtimestamp(req.stat().st_mtime)
- for req in self.requires)
+ now - datetime.fromtimestamp(req.stat().st_mtime) for req in self.requires
+ )
@property
def age(self) -> timedelta:
if not self.output.exists():
return timedelta.max
@@ -512,21 +512,33 @@
def target(self, f):
@MethodAttributes.wraps(f)
async def wrapper(*args, **kwargs):
result = await async_wrap(f, *args, **kwargs)
- assert result is not None, "Target definition for '%s' didn't return a value." % f.__name__
+ assert result is not None, (
+ "Target definition for '%s' didn't return a value." % f.__name__
+ )
if is_iterable(result):
results = list(result)
- assert all(
- isinstance(obj, Recipe) for obj in results
- ), ("Target definition for '%s' returned an iterable containing non-Recipe values (e.g. '%s')." % (
- f.__name__, next(type(obj).__qualname__ for obj in result if not isinstance(obj, Recipe))))
+ assert all(isinstance(obj, Recipe) for obj in results), (
+ "Target definition for '%s' returned an iterable containing non-Recipe values (e.g. '%s')."
+ % (
+ f.__name__,
+ next(
+ type(obj).__qualname__
+ for obj in result
+ if not isinstance(obj, Recipe)
+ ),
+ )
+ )
result = Recipe(f.__name__, results, isinstance(result, tuple))
assert isinstance(
result, Recipe
- ), "Target definition for '%s' returned a non-Recipe value ('%s')." % (f.__name__, type(result).__qualname__)
+ ), "Target definition for '%s' returned a non-Recipe value ('%s')." % (
+ f.__name__,
+ type(result).__qualname__,
+ )
result.origin = f.__name__
return result
attrs = MethodAttributes.for_method(wrapper, True, True)
attrs.put(TARGET_ATTR)
--- first pass
+++ second pass
@@ -517,20 +517,19 @@
assert result is not None, (
"Target definition for '%s' didn't return a value." % f.__name__
)
if is_iterable(result):
results = list(result)
- assert all(isinstance(obj, Recipe) for obj in results), (
- "Target definition for '%s' returned an iterable containing non-Recipe values (e.g. '%s')."
- % (
- f.__name__,
- next(
- type(obj).__qualname__
- for obj in result
- if not isinstance(obj, Recipe)
- ),
- )
+ assert all(
+ isinstance(obj, Recipe) for obj in results
+ ), "Target definition for '%s' returned an iterable containing non-Recipe values (e.g. '%s')." % (
+ f.__name__,
+ next(
+ type(obj).__qualname__
+ for obj in result
+ if not isinstance(obj, Recipe)
+ ),
)
result = Recipe(f.__name__, results, isinstance(result, tuple))
assert isinstance(
result, Recipe
), "Target definition for '%s' returned a non-Recipe value ('%s')." % (
Wait, the --fast option just skips the deterministic checking, though. Right?
Correct, but after the second pass it becomes stable and can be linted without --fast.
Passing --fast to Black isn't fully safe, as Black can produce invalid code. --fast disables the unstable formatting check, the AST equivalence check, and the valid Python code check. Black can accidentally destroy a valid Python program; here's an example with --fast.
This is true; it is certainly not a fix. But in my case, for this one file, it produced semantically identical code after "stabilizing" and let me move on with using Black normally.
INTERNAL ERROR: Black produced different code on the second pass of the formatter. Please report a bug on https://github.com/psf/black/issues. This diff might be helpful: /tmp/blk_pk9o3j1v.log
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -1,6 +1,8 @@
def fn():
if True:
if True:
- if fn(a.form['password'].encode('utf-8'), signin_user['password'].encode('utf-8')) == \
- c['d'].e('f'):
+ if fn(
+ a.form["password"].encode("utf-8"),
+ signin_user["password"].encode("utf-8"),
+ ) == c["d"].e("f"):
pass
--- first pass
+++ second pass
@@ -1,8 +1,11 @@
def fn():
if True:
if True:
- if fn(
- a.form["password"].encode("utf-8"),
- signin_user["password"].encode("utf-8"),
- ) == c["d"].e("f"):
+ if (
+ fn(
+ a.form["password"].encode("utf-8"),
+ signin_user["password"].encode("utf-8"),
+ )
+ == c["d"].e("f")
+ ):
pass
black==20.8b1
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -9,114 +9,130 @@
from simple_history.models import HistoricalRecords
def diff_objects(old_instance, new_instance, fields):
"""
- Diff two objects by examining the given fields and
- return a string.
+ Diff two objects by examining the given fields and
+ return a string.
"""
full_diff = []
for field in fields:
field_diff = []
old_value = getattr(old_instance, field.attname)
new_value = getattr(new_instance, field.attname)
- for line in difflib.unified_diff(str(old_value).split('\n'),
- str(new_value).split('\n'),
- fromfile=field.attname,
- tofile=field.attname,
- lineterm=""):
+ for line in difflib.unified_diff(
+ str(old_value).split("\n"),
+ str(new_value).split("\n"),
+ fromfile=field.attname,
+ tofile=field.attname,
+ lineterm="",
+ ):
field_diff.append(line)
full_diff.extend(field_diff)
return "\n".join(full_diff)
def history_email_for(instance, title):
"""
- Generate the subject and email body that is sent via
- email notifications post update!
+ Generate the subject and email body that is sent via
+ email notifications post update!
"""
history = instance.history.latest()
subject = _("UPDATE: %(model_name)s #%(pk)d - %(title)s") % {
- 'model_name': instance.__class__.__name__,
- 'pk': instance.pk,
- 'title': title
+ "model_name": instance.__class__.__name__,
+ "pk": instance.pk,
+ "title": title,
}
- body = _("""Updated on %(history_date)s
+ body = (
+ _(
+ """Updated on %(history_date)s
Updated by %(username)s
%(diff)s
For more information:
-%(instance_url)s""") % {'history_date': history.history_date.strftime('%c'),
- 'username': getattr(history.history_user, 'username', ''),
- 'diff': history.history_change_reason,
- 'instance_url': instance.get_full_url()}
+%(instance_url)s"""
+ )
+ % {
+ "history_date": history.history_date.strftime("%c"),
+ "username": getattr(history.history_user, "username", ""),
+ "diff": history.history_change_reason,
+ "instance_url": instance.get_full_url(),
+ }
+ )
return subject, body
class KiwiHistoricalRecords(HistoricalRecords):
"""
- This class will keep track of what fields were changed
- inside of the ``history_change_reason`` field. This gives us
- a crude changelog until upstream introduces their new interface.
+ This class will keep track of what fields were changed
+ inside of the ``history_change_reason`` field. This gives us
+ a crude changelog until upstream introduces their new interface.
"""
+
def pre_save(self, instance, **kwargs):
"""
- Signal handlers don't have access to the previous version of
- an object so we have to load it from the database!
+ Signal handlers don't have access to the previous version of
+ an object so we have to load it from the database!
"""
- if kwargs.get('raw', False):
+ if kwargs.get("raw", False):
return
- if instance.pk and hasattr(instance, 'history'):
- instance.previous = instance.__class__.objects.filter(pk=instance.pk).first()
+ if instance.pk and hasattr(instance, "history"):
+ instance.previous = instance.__class__.objects.filter(
+ pk=instance.pk
+ ).first()
def post_save(self, instance, created, using=None, **kwargs):
"""
- Calculate the changelog and call the inherited method to
- write the data into the database.
+ Calculate the changelog and call the inherited method to
+ write the data into the database.
"""
- if kwargs.get('raw', False):
+ if kwargs.get("raw", False):
return
- if hasattr(instance, 'previous') and instance.previous:
+ if hasattr(instance, "previous") and instance.previous:
# note: simple_history.utils.update_change_reason() performs an extra
# DB query so it is better to use the private field instead!
# In older simple_history version this field wasn't private but was renamed
# in 2.10.0 hence the pylint disable!
instance._change_reason = diff_objects( # pylint: disable=protected-access
- instance.previous, instance, self.fields_included(instance))
+ instance.previous, instance, self.fields_included(instance)
+ )
super().post_save(instance, created, using, **kwargs)
def finalize(self, sender, **kwargs):
"""
- Connect the pre_save signal handler after calling the inherited method.
+ Connect the pre_save signal handler after calling the inherited method.
"""
super().finalize(sender, **kwargs)
signals.pre_save.connect(self.pre_save, sender=sender, weak=False)
class ReadOnlyHistoryAdmin(SimpleHistoryAdmin):
"""
- Custom history admin which shows all fields
- as read-only.
+ Custom history admin which shows all fields
+ as read-only.
"""
- history_list_display = ['Diff']
+
+ history_list_display = ["Diff"]
def Diff(self, obj): # pylint: disable=invalid-name
- return safe('<pre>%s</pre>' % obj.history_change_reason)
+ return safe("<pre>%s</pre>" % obj.history_change_reason)
def get_readonly_fields(self, request, obj=None):
# make all fields readonly
- readonly_fields = list(set(
- [field.name for field in self.opts.local_fields] +
- [field.name for field in self.opts.local_many_to_many]
- ))
+ readonly_fields = list(
+ set(
+ [field.name for field in self.opts.local_fields]
+ + [field.name for field in self.opts.local_many_to_many]
+ )
+ )
return readonly_fields
def response_change(self, request, obj):
super().response_change(request, obj)
return HttpResponseRedirect(obj.get_absolute_url())
--- first pass
+++ second pass
@@ -44,27 +44,24 @@
"model_name": instance.__class__.__name__,
"pk": instance.pk,
"title": title,
}
- body = (
- _(
- """Updated on %(history_date)s
+ body = _(
+ """Updated on %(history_date)s
Updated by %(username)s
%(diff)s
For more information:
%(instance_url)s"""
- )
- % {
- "history_date": history.history_date.strftime("%c"),
- "username": getattr(history.history_user, "username", ""),
- "diff": history.history_change_reason,
- "instance_url": instance.get_full_url(),
- }
- )
+ ) % {
+ "history_date": history.history_date.strftime("%c"),
+ "username": getattr(history.history_user, "username", ""),
+ "diff": history.history_change_reason,
+ "instance_url": instance.get_full_url(),
+ }
return subject, body
class KiwiHistoricalRecords(HistoricalRecords):
"""
Please find below the generated log when trying to format this file:
def process_import(file_import, format="csv"):
# ignore files that are not csv
if file_import.provider in [Provider.PAYLINE, Provider.ADYEN] and not file_import.path.endswith("csv"):
return
Mode(target_versions={<TargetVersion.PY36: 6>}, line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -1,5 +1,7 @@
-
def process_import(file_import, format="csv"):
# ignore files that are not csv
- if file_import.provider in [Provider.PAYLINE, Provider.ADYEN] and not file_import.path.endswith("csv"):
+ if file_import.provider in [
+ Provider.PAYLINE,
+ Provider.ADYEN,
+ ] and not file_import.path.endswith("csv"):
return
--- first pass
+++ second pass
@@ -1,7 +1,11 @@
def process_import(file_import, format="csv"):
# ignore files that are not csv
- if file_import.provider in [
- Provider.PAYLINE,
- Provider.ADYEN,
- ] and not file_import.path.endswith("csv"):
+ if (
+ file_import.provider
+ in [
+ Provider.PAYLINE,
+ Provider.ADYEN,
+ ]
+ and not file_import.path.endswith("csv")
+ ):
return
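The condition being reformatted above can be mirrored with plain stand-ins to show what triggers the flip-flop. This is a hypothetical reconstruction (the provider values and all names below are made up for illustration, not from the reporter's codebase):

```python
# The magic trailing comma is the trigger: once the list gains a
# trailing comma on the first pass, Black keeps it exploded on later
# passes and may then re-decide the optional parentheses around the
# whole `if` condition, producing a different second pass.
CSV_PROVIDERS = [
    "payline",
    "adyen",  # trailing comma => Black keeps this list one-per-line
]


def should_skip(provider: str, path: str) -> bool:
    # ignore files that are not csv (mirrors the reporter's check)
    return provider in CSV_PROVIDERS and not path.endswith("csv")
```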
Hello! I used Black in a pre-commit hook (rev: 20.8b1) and got: error: cannot format /home/path/to/tests/test_task1.py: INTERNAL ERROR: Black produced different code on the second pass of the formatter. blk_krk51l0g.log
OS: Linux Debian 10. IDE: VS Code v.1.52.0.
Here's my log file.
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -192,26 +192,28 @@
word_ends = []
# check if there are any tokens that need to be split into words.
if _split_precondition(tokens, words):
- tokens_to_split = set(['),', ').'])
+ tokens_to_split = set(["),", ")."])
word_i = 0
token_i = 0
while word_i < len(words) and token_i < len(tokens):
if (
words[word_i].lower().endswith(tokens[token_i].lower().strip())
and tokens[token_i].strip()
):
word_ends.append(tokens[token_i])
word_i += 1
- elif tokens[token_i] in tokens_to_split and (words[word_i], words[word_i+1]) == tuple(tokens[token_i]):
+ elif tokens[token_i] in tokens_to_split and (
+ words[word_i],
+ words[word_i + 1],
+ ) == tuple(tokens[token_i]):
word_ends.append(tokens[token_i])
word_ends.append(tokens[token_i])
word_i += 1
-
token_i += 1
else:
word_i = 0
token_i = 0
--- first pass
+++ second pass
@@ -203,14 +203,18 @@
words[word_i].lower().endswith(tokens[token_i].lower().strip())
and tokens[token_i].strip()
):
word_ends.append(tokens[token_i])
word_i += 1
- elif tokens[token_i] in tokens_to_split and (
- words[word_i],
- words[word_i + 1],
- ) == tuple(tokens[token_i]):
+ elif (
+ tokens[token_i] in tokens_to_split
+ and (
+ words[word_i],
+ words[word_i + 1],
+ )
+ == tuple(tokens[token_i])
+ ):
word_ends.append(tokens[token_i])
word_ends.append(tokens[token_i])
word_i += 1
token_i += 1
If you find a case of this, please attach the generated log here so we can investigate.
The --fast workaround is OK for now.
$ black --version
black, version 20.8b1
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -1,5 +1,13 @@
for i in range(registers_count):
- msg = 'HALF_PAGE = %2g INSTANCE = %2g NAME = %s ADDR = h%0X - %g | RDATA = h%0X | EDATA = h%0X' % \
- (half_pages_array[i], instances_array[i], name_array[i],
- address_array[i], address_array[i], rdata_array[i],
- edata_array[i])
+ msg = (
+ "HALF_PAGE = %2g INSTANCE = %2g NAME = %s ADDR = h%0X - %g | RDATA = h%0X | EDATA = h%0X"
+ % (
+ half_pages_array[i],
+ instances_array[i],
+ name_array[i],
+ address_array[i],
+ address_array[i],
+ rdata_array[i],
+ edata_array[i],
+ )
+ )
--- first pass
+++ second pass
@@ -1,13 +1,10 @@
for i in range(registers_count):
- msg = (
- "HALF_PAGE = %2g INSTANCE = %2g NAME = %s ADDR = h%0X - %g | RDATA = h%0X | EDATA = h%0X"
- % (
- half_pages_array[i],
- instances_array[i],
- name_array[i],
- address_array[i],
- address_array[i],
- rdata_array[i],
- edata_array[i],
- )
+ msg = "HALF_PAGE = %2g INSTANCE = %2g NAME = %s ADDR = h%0X - %g | RDATA = h%0X | EDATA = h%0X" % (
+ half_pages_array[i],
+ instances_array[i],
+ name_array[i],
+ address_array[i],
+ address_array[i],
+ rdata_array[i],
+ edata_array[i],
)
I found this error:
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -3,8 +3,15 @@
# -*- coding: utf-8 -*-
class Foo(object):
def bar(self):
- x = 'foo ' + \
- 'subscription {} resource_group {} Site {}, foobar_name {}. Error {}'.format(
- self._subscription, self._resource_group, self._site, self._foobar_name, error)
+ x = (
+ "foo "
+ + "subscription {} resource_group {} Site {}, foobar_name {}. Error {}".format(
+ self._subscription,
+ self._resource_group,
+ self._site,
+ self._foobar_name,
+ error,
+ )
+ )
--- first pass
+++ second pass
@@ -3,15 +3,12 @@
# -*- coding: utf-8 -*-
class Foo(object):
def bar(self):
- x = (
- "foo "
- + "subscription {} resource_group {} Site {}, foobar_name {}. Error {}".format(
- self._subscription,
- self._resource_group,
- self._site,
- self._foobar_name,
- error,
- )
+ x = "foo " + "subscription {} resource_group {} Site {}, foobar_name {}. Error {}".format(
+ self._subscription,
+ self._resource_group,
+ self._site,
+ self._foobar_name,
+ error,
)
From the code:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
class Foo(object):
def bar(self):
x = 'foo ' + \
'subscription {} resource_group {} Site {}, foobar_name {}. Error {}'.format(
self._subscription, self._resource_group, self._site, self._foobar_name, error)
Case 1:
Mode(target_versions={<TargetVersion.PY37: 7>, <TargetVersion.PY36: 6>, <TargetVersion.PY38: 8>}, line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -1,13 +1,8 @@
class A:
def a():
- return (
- A(aaaaaa) | Aaaaaaaaaaaa(
- lambda aaaaaaa: aaaaaaaaa.Aaaaaa(
- aaaaaa=aaaaaa.Aaaaaaaaaaaaaaaaaaaaaa.aaaaaaaa,
- aaaaa=aaaaaaaaa.Aaaaaa(
- a,
- a
- )
- )
- ).a(a)
- )
+ return A(aaaaaa) | Aaaaaaaaaaaa(
+ lambda aaaaaaa: aaaaaaaaa.Aaaaaa(
+ aaaaaa=aaaaaa.Aaaaaaaaaaaaaaaaaaaaaa.aaaaaaaa,
+ aaaaa=aaaaaaaaa.Aaaaaa(a, a),
+ )
+ ).a(a)
--- first pass
+++ second pass
@@ -1,8 +1,11 @@
class A:
def a():
- return A(aaaaaa) | Aaaaaaaaaaaa(
- lambda aaaaaaa: aaaaaaaaa.Aaaaaa(
- aaaaaa=aaaaaa.Aaaaaaaaaaaaaaaaaaaaaa.aaaaaaaa,
- aaaaa=aaaaaaaaa.Aaaaaa(a, a),
- )
- ).a(a)
+ return (
+ A(aaaaaa)
+ | Aaaaaaaaaaaa(
+ lambda aaaaaaa: aaaaaaaaa.Aaaaaa(
+ aaaaaa=aaaaaa.Aaaaaaaaaaaaaaaaaaaaaa.aaaaaaaa,
+ aaaaa=aaaaaaaaa.Aaaaaa(a, a),
+ )
+ ).a(a)
+ )
Case 2:
Mode(target_versions={<TargetVersion.PY37: 7>, <TargetVersion.PY36: 6>, <TargetVersion.PY38: 8>}, line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -1,4 +1,8 @@
def a():
assert a(
- aaaaaaaaaaaa=aaa, aaaaaaaaaaa=aaa, aaaaaaaaaaaaaa=aaa, aaaaaaaaaaaaa=aaa, aaaaaaaaaaaaaaaaa=aaaa
+ aaaaaaaaaaaa=aaa,
+ aaaaaaaaaaa=aaa,
+ aaaaaaaaaaaaaa=aaa,
+ aaaaaaaaaaaaa=aaa,
+ aaaaaaaaaaaaaaaaa=aaaa,
) == (0, 0)
--- first pass
+++ second pass
@@ -1,8 +1,11 @@
def a():
- assert a(
- aaaaaaaaaaaa=aaa,
- aaaaaaaaaaa=aaa,
- aaaaaaaaaaaaaa=aaa,
- aaaaaaaaaaaaa=aaa,
- aaaaaaaaaaaaaaaaa=aaaa,
- ) == (0, 0)
+ assert (
+ a(
+ aaaaaaaaaaaa=aaa,
+ aaaaaaaaaaa=aaa,
+ aaaaaaaaaaaaaa=aaa,
+ aaaaaaaaaaaaa=aaa,
+ aaaaaaaaaaaaaaaaa=aaaa,
+ )
+ == (0, 0)
+ )
This is a rare problem that we're currently investigating. The most common case of it has to do with a combination of magic trailing commas and optional parentheses. Long story short, there's this behavior:
The expected behavior is that there should be no difference between the first formatting and the second formatting. In practice Black sometimes chooses for or against optional parentheses differently depending on whether the line should be exploded or not. This is what needs fixing.
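The check that fires here can be sketched generically: formatting must be a fixed point of itself, so Black formats its own output a second time and compares. This is a simplified illustration, not Black's actual implementation:

```python
# Simplified sketch of the stability check behind the INTERNAL ERROR
# (not Black's real code): formatting the formatter's own output must
# change nothing, otherwise the formatter is unstable.
def assert_stable(fmt, src: str) -> str:
    first = fmt(src)
    second = fmt(first)
    if first != second:
        raise AssertionError(
            "formatter produced different code on the second pass"
        )
    return first


# A deliberately unstable "formatter": every pass appends a space,
# so the second pass never matches the first.
def unstable_fmt(src: str) -> str:
    return src + " "
```

For example, `str.strip` passes this check (stripping twice equals stripping once), while `unstable_fmt` trips it. Black's real safety net additionally verifies AST equivalence and that the output still parses; --fast skips all three.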
Workaround
We're working on fixing this. Until then, format the file twice with --fast; it will keep its formatting moving forward.
Call To Action
If you find a case of this, please attach the generated log here so we can investigate. We've already added three identifying examples of this as expected failures to https://github.com/psf/black/pull/1627/commits/25206d8cc6e98143f0b10bcbe9e8b41b8b543abe.
Finally, if you're interested in debugging this yourself, look for should_explode in if statements in the Black codebase. Those are the decisions that lead to unstable formatting.
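When attaching a log, a smaller input is much easier to investigate. A generic line-based reducer can shrink a large failing file down to a minimal reproducer; this sketch is not part of Black, and `still_fails` is any predicate you supply (e.g. one that writes the candidate lines to a temp file and returns whether Black still reports "different code on the second pass" on it):

```python
# Greedily drop chunks of lines while the caller-supplied predicate
# still reports the failure, halving the chunk size as removals stop
# succeeding. Returns a (locally) minimal failing subset of lines.
def shrink_lines(lines, still_fails):
    chunk = len(lines) // 2 or 1
    while chunk >= 1:
        i = 0
        while i < len(lines):
            candidate = lines[:i] + lines[i + chunk:]
            if candidate and still_fails(candidate):
                lines = candidate  # removal kept the failure: commit it
            else:
                i += chunk  # removal lost the failure: keep the chunk
        chunk //= 2
    return lines
```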