ambv closed this issue 3 years ago
A simple example:
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -1,2 +1,5 @@
def keys(self):
- return set(self._data.keys()) - {"membersOfSupportGroup_mails", "membersOfSupportGroup_uids"} | {"membersOfSupportGroup"}
+ return set(self._data.keys()) - {
+ "membersOfSupportGroup_mails",
+ "membersOfSupportGroup_uids",
+ } | {"membersOfSupportGroup"}
--- first pass
+++ second pass
@@ -1,5 +1,9 @@
def keys(self):
- return set(self._data.keys()) - {
- "membersOfSupportGroup_mails",
- "membersOfSupportGroup_uids",
- } | {"membersOfSupportGroup"}
+ return (
+ set(self._data.keys())
+ - {
+ "membersOfSupportGroup_mails",
+ "membersOfSupportGroup_uids",
+ }
+ | {"membersOfSupportGroup"}
+ )
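For anyone collecting more of these, the pattern can be reproduced mechanically by running a formatter twice on its own output and diffing the two passes. A minimal stdlib sketch of that harness (this is not Black's internal check, just difflib):

```python
import difflib

def second_pass_diff(format_fn, src):
    """Diff between the first and second formatting pass; empty string means idempotent."""
    first = format_fn(src)
    second = format_fn(first)
    return "".join(
        difflib.unified_diff(
            first.splitlines(keepends=True),
            second.splitlines(keepends=True),
            fromfile="first pass",
            tofile="second pass",
        )
    )

# A toy "formatter" that always adds parentheses is not idempotent:
wrap = lambda s: "(" + s.strip() + ")\n"
print(second_pass_diff(wrap, "a + b\n"))
```

With Black itself, format_fn would be something like functools.partial(black.format_str, mode=black.Mode()), assuming black is importable in your environment.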
Another simple example, with numpy arrays (I removed the non-problematic parts):
Mode(target_versions=set(), line_length=120, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -31,38 +31,39 @@
def test_get_vertices(box_geometry: BoxGeometry):
vertices = box_geometry.vertices
- expected_vertices = np.array([[-0.5, 0.5, -0.5],
- [0.5, 0.5, -0.5],
- [0.5, 0.5, 0.5],
- [-0.5, 0.5, 0.5],
- [-0.5, -0.5, -0.5],
- [0.5, -0.5, -0.5],
- [0.5, -0.5, 0.5],
- [-0.5, -0.5, 0.5]]) + [1, 0, 0]
+ expected_vertices = np.array(
+ [
+ [-0.5, 0.5, -0.5],
+ [0.5, 0.5, -0.5],
+ [0.5, 0.5, 0.5],
+ [-0.5, 0.5, 0.5],
+ [-0.5, -0.5, -0.5],
+ [0.5, -0.5, -0.5],
+ [0.5, -0.5, 0.5],
+ [-0.5, -0.5, 0.5],
+ ]
+ ) + [1, 0, 0]
assert vertices.shape == (8, 3)
assert np.array_equal(vertices, expected_vertices)
--- first pass
+++ second pass
@@ -48,22 +48,25 @@
def test_get_vertices(box_geometry: BoxGeometry):
vertices = box_geometry.vertices
- expected_vertices = np.array(
- [
- [-0.5, 0.5, -0.5],
- [0.5, 0.5, -0.5],
- [0.5, 0.5, 0.5],
- [-0.5, 0.5, 0.5],
- [-0.5, -0.5, -0.5],
- [0.5, -0.5, -0.5],
- [0.5, -0.5, 0.5],
- [-0.5, -0.5, 0.5],
- ]
- ) + [1, 0, 0]
+ expected_vertices = (
+ np.array(
+ [
+ [-0.5, 0.5, -0.5],
+ [0.5, 0.5, -0.5],
+ [0.5, 0.5, 0.5],
+ [-0.5, 0.5, 0.5],
+ [-0.5, -0.5, -0.5],
+ [0.5, -0.5, -0.5],
+ [0.5, -0.5, 0.5],
+ [-0.5, -0.5, 0.5],
+ ]
+ )
+ + [1, 0, 0]
+ )
assert vertices.shape == (8, 3)
assert np.array_equal(vertices, expected_vertices)
Pulling more information in from #1954, which appears to be a duplicate of this issue.
$ black --version
black, version 20.8b1
--- source
+++ first pass
@@ -1 +1,17 @@
-assert function(arg1, arg2, arg3, arg4, arg5, arg6, arg7, arg8, arg9, arg10, arg11, arg12, arg13, arg14, arg15) != [None]
+assert function(
+ arg1,
+ arg2,
+ arg3,
+ arg4,
+ arg5,
+ arg6,
+ arg7,
+ arg8,
+ arg9,
+ arg10,
+ arg11,
+ arg12,
+ arg13,
+ arg14,
+ arg15,
+) != [None]
--- first pass
+++ second pass
@@ -1,17 +1,20 @@
-assert function(
- arg1,
- arg2,
- arg3,
- arg4,
- arg5,
- arg6,
- arg7,
- arg8,
- arg9,
- arg10,
- arg11,
- arg12,
- arg13,
- arg14,
- arg15,
-) != [None]
+assert (
+ function(
+ arg1,
+ arg2,
+ arg3,
+ arg4,
+ arg5,
+ arg6,
+ arg7,
+ arg8,
+ arg9,
+ arg10,
+ arg11,
+ arg12,
+ arg13,
+ arg14,
+ arg15,
+ )
+ != [None]
+)
Adding an example here; I cannot provide the full source file:
black, version 20.8b1
Mode(target_versions={<TargetVersion.PY38: 8>}, line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
- self.bom_standard.loc[(self.bom_standard[end_prod_col] == i) &
- (self.bom_standard[
- sub_asm_col] == j), 'subassm_lvl_1_num'] = \
- subasm_unique.index(j) + 1
- self.bom_standard.loc[(self.bom_standard[end_prod_col] == i) &
- (self.bom_standard[
- sub_asm_col] == j), 'part_lvl_num'] = \
- range(1, 1 + self.bom_standard.loc[
+ self.bom_standard.loc[
+ (self.bom_standard[end_prod_col] == i)
+ & (self.bom_standard[sub_asm_col] == j),
+ "subassm_lvl_1_num",
+ ] = (subasm_unique.index(j) + 1)
+ self.bom_standard.loc[
+ (self.bom_standard[end_prod_col] == i)
+ & (self.bom_standard[sub_asm_col] == j),
+ "part_lvl_num",
+ ] = range(
+++ second pass
@@ -481,11 +481,13 @@
self.bom_standard.loc[
(self.bom_standard[end_prod_col] == i)
& (self.bom_standard[sub_asm_col] == j),
"subassm_lvl_1_num",
- ] = (subasm_unique.index(j) + 1)
+ ] = (
+ subasm_unique.index(j) + 1
+ )
self.bom_standard.loc[
(self.bom_standard[end_prod_col] == i)
& (self.bom_standard[sub_asm_col] == j),
"part_lvl_num",
] = range(
Fuzz-testing seems to have uncovered an input string that surfaces second-pass formatting problems during an automated build:
https://github.com/psf/black/runs/1834260212#step:5:62
Falsifying example: test_idempotent_any_syntatically_valid_python(
src_contents='class A:\\\r\n# type: ignore\n pass\n',
mode=Mode(target_versions=set(), line_length=88, string_normalization=False, magic_trailing_comma=True, experimental_string_processing=False, is_pyi=False),
)
@jayaddison that's already tracked via #1913.
Thanks @ichard26 - that's almost true, but in the logs for that build the fuzzer really did generate an input that reproduces #1629 (INTERNAL ERROR: Black produced different code on the second pass of the formatter). It's slightly surprising, but it is valid and relevant here, I think. #1913 seems to relate to the fuzzer exposing a separate libcst-related error.
I had thought that this issue was mostly focused on instability bugs related to new and should_explode, not on every black produced different code on the second pass of the formatter error, but IDK and IDC now. Also, while in the beginning #1913 exposed a hypothesmith issue (libcst wasn't related at all), the issue was reopened for the class A:\\\r\n# type: ignore\n pass\n case.
Thanks @ichard26; my mistake previously (not reading enough context, as usual). You're correct, and I've encountered another occurrence of 'class A:\\\r\n# type: ignore\n pass\n', which I'll track against #1913.
Same happens here with black 20.8b1:
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -417,11 +417,14 @@
context.error(
f"Type mismatch, expected {target.type} got {stack_types[-1]}!"
)"""
else:
if target.type != return_type:
- if target.type in [VariableType.BYTE, VariableType.INT] and return_type in [VariableType.BYTE, VariableType.INT]:
+ if target.type in [
+ VariableType.BYTE,
+ VariableType.INT,
+ ] and return_type in [VariableType.BYTE, VariableType.INT]:
return_type = target.type
context.free_register(r0)
context.free_register(r1)
--- first pass
+++ second pass
@@ -417,14 +417,18 @@
context.error(
f"Type mismatch, expected {target.type} got {stack_types[-1]}!"
)"""
else:
if target.type != return_type:
- if target.type in [
- VariableType.BYTE,
- VariableType.INT,
- ] and return_type in [VariableType.BYTE, VariableType.INT]:
+ if (
+ target.type
+ in [
+ VariableType.BYTE,
+ VariableType.INT,
+ ]
+ and return_type in [VariableType.BYTE, VariableType.INT]
+ ):
return_type = target.type
context.free_register(r0)
context.free_register(r1)
Version: black, version 20.8b1
(with --fast mode, it works)
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -44,12 +44,14 @@
def _output_of(cmd: str) -> str:
return subprocess.check_output(shlex.split(cmd)).decode().strip()
+
def do_test_for(type_of_match: FlagEnum) -> None:
...
+
class TestClass:
ls = 'echo "%s"' % "\n".join(
(
"LICENSE",
@@ -70,13 +72,13 @@
# Make sure "f" works too
assert _pipe_command(
self.ls, R' python -m ret "(\w+)\..{4}" f -g 1'
) == "\n".join(("poetry", "pyproject"))
assert _pipe_command(
- self.ls, R' python -m ret "(?P<some_long_group_name>\w+)\..{4}" f -g some_long_group_name'
+ self.ls,
+ R' python -m ret "(?P<some_long_group_name>\w+)\..{4}" f -g some_long_group_name',
) == "\n".join(("poetry", "pyproject"))
-
def test_search(self):
assert _pipe_command(
self.ls, R'python -m ret "LICENSE" search'
) == _pipe_command(self.ls, _grep_on_all("LICENSE"))
@@ -127,12 +129,11 @@
"setup.cfg",
"tests.yes",
)
)
assert (
- _pipe_command(als, R' python -m ret "L\w+" f -a --ascii')
- == "LC\nLREADME"
+ _pipe_command(als, R' python -m ret "L\w+" f -a --ascii') == "LC\nLREADME"
)
assert (
_pipe_command(self.ls, 'python -m ret "LICENSE" f -m --multiline')
== "LICENSE"
)
--- first pass
+++ second pass
@@ -71,14 +71,17 @@
# Make sure "f" works too
assert _pipe_command(
self.ls, R' python -m ret "(\w+)\..{4}" f -g 1'
) == "\n".join(("poetry", "pyproject"))
- assert _pipe_command(
- self.ls,
- R' python -m ret "(?P<some_long_group_name>\w+)\..{4}" f -g some_long_group_name',
- ) == "\n".join(("poetry", "pyproject"))
+ assert (
+ _pipe_command(
+ self.ls,
+ R' python -m ret "(?P<some_long_group_name>\w+)\..{4}" f -g some_long_group_name',
+ )
+ == "\n".join(("poetry", "pyproject"))
+ )
def test_search(self):
assert _pipe_command(
self.ls, R'python -m ret "LICENSE" search'
) == _pipe_command(self.ls, _grep_on_all("LICENSE"))
If anyone has time and interest in testing a potential fix for the second-pass formatting instability, I'd welcome your feedback on https://github.com/psf/black/pull/1958.
As always, please read the changes and make sure you're comfortable with them before checking them out and running them locally, as you should (ideally) for any other code. And feel free to ask in the pull request if anything is unclear. I can't promise to give answers to everything but I'll try.
I've used some of the examples from this thread during development and have had promising results so far, but in particular I'd be interested to hear about any counter-examples that continue to break.
If you're still looking for submissions, here's mine. (It does work with the proposed PR, however.)
Found this simple example with Python 2 code:
class Test:
    """comment
    """
    def __init__(self):
        print "test"
Result:
error: cannot format test.py: INTERNAL ERROR: Black produced code that is not equivalent to the source.
--- src
+++ dst
@@ -7,11 +7,11 @@
value=
Constant(
kind=
None, # NoneType
value=
- b'comment\n ', # bytes
+ b'comment', # bytes
) # /Constant
) # /Expr
FunctionDef(
args=
arguments(
@chris-t-w that one is different from most of the ones covered here, since it's about docstrings rather than line splitting; it may be easier to handle in a new issue.
If you're interested, it's probably also not going to be too difficult to fix: we just have to relax the "not equivalent to the source" check to allow more whitespace changes in docstrings.
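A sketch of what relaxing that check could look like: compare AST dumps after stripping trailing whitespace inside docstrings. The function name and approach here are illustrative only, not Black's actual equivalence check:

```python
import ast

def dump_ignoring_docstring_whitespace(src):
    """ast.dump of src, with trailing whitespace stripped from each docstring line."""
    tree = ast.parse(src)
    for node in ast.walk(tree):
        body = getattr(node, "body", None)
        # A docstring is a bare string expression as the first statement of a body.
        if (
            isinstance(body, list)
            and body
            and isinstance(body[0], ast.Expr)
            and isinstance(body[0].value, ast.Constant)
            and isinstance(body[0].value.value, str)
        ):
            doc = body[0].value.value
            body[0].value.value = "\n".join(line.rstrip() for line in doc.split("\n"))
    return ast.dump(tree)

# These differ only in trailing whitespace inside the docstring, so they compare equal:
before = 'def f():\n    """comment\n    """\n'
after = 'def f():\n    """comment\n"""\n'
assert dump_ignoring_docstring_whitespace(before) == dump_ignoring_docstring_whitespace(after)
```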
An example equivalent to something that happened naturally in some code (version 20.8b1):
Mode(target_versions=set(), line_length=80, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -227,11 +227,15 @@
- xx = funstuff(sthnsaotheusnthoeasnuthsnt, saotehusaontehusnotheusn, soantehusnoatheusnth) + [x, y, z, w]
+ xx = funstuff(
+ sthnsaotheusnthoeasnuthsnt,
+ saotehusaontehusnotheusn,
+ soantehusnoatheusnth,
+ ) + [x, y, z, w]
--- first pass
+++ second pass
@@ -227,15 +227,18 @@
- xx = funstuff(
- sthnsaotheusnthoeasnuthsnt,
- saotehusaontehusnotheusn,
- soantehusnoatheusnth,
- ) + [x, y, z, w]
+ xx = (
+ funstuff(
+ sthnsaotheusnthoeasnuthsnt,
+ saotehusaontehusnotheusn,
+ soantehusnoatheusnth,
+ )
+ + [x, y, z, w]
+ )
Here's an example I came across (black version is 20.8b1):
Here's an example I have encountered (version is 20.8b1). The workaround with --fast did work.
[adrian@eluvian:/tmp]> black --version
black, version 20.8b1
[adrian@eluvian:/tmp]> black schemas.py
error: cannot format schemas.py: INTERNAL ERROR: Black produced different code on the second pass of the formatter. Please report a bug on https://github.com/psf/black/issues. This diff might be helpful: /tmp/blk_a12e03ze.log
Oh no! 💥 💔 💥
1 file failed to reformat.
[adrian@eluvian:/tmp]> cat schemas.py
class PaperRevisionTimelineField(Field):
    def _serialize(self, value, attr, obj, **kwargs):
        if not value:
            return []
        serialized = []
        user = self.context.get('user')
        review_comment_schema = PaperReviewCommentSchema(context=self.context)
        review_schema = PaperReviewSchema(context=self.context)
        for timeline_item in value:
            if timeline_item.timeline_item_type in ('comment', 'review') and not timeline_item.can_view(user):
                continue
            serialized_item = {'timeline_item_type': timeline_item.timeline_item_type}
            if timeline_item.timeline_item_type == 'comment':
                serialized_item.update(review_comment_schema.dump(timeline_item))
            elif timeline_item.timeline_item_type == 'review':
                serialized_item.update(review_schema.dump(timeline_item))
            serialized.append(serialized_item)
        return serialized
[adrian@eluvian:/tmp]> cat /tmp/blk_a12e03ze.log
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -2,19 +2,22 @@
def _serialize(self, value, attr, obj, **kwargs):
if not value:
return []
serialized = []
- user = self.context.get('user')
+ user = self.context.get("user")
review_comment_schema = PaperReviewCommentSchema(context=self.context)
review_schema = PaperReviewSchema(context=self.context)
for timeline_item in value:
- if timeline_item.timeline_item_type in ('comment', 'review') and not timeline_item.can_view(user):
+ if timeline_item.timeline_item_type in (
+ "comment",
+ "review",
+ ) and not timeline_item.can_view(user):
continue
- serialized_item = {'timeline_item_type': timeline_item.timeline_item_type}
- if timeline_item.timeline_item_type == 'comment':
+ serialized_item = {"timeline_item_type": timeline_item.timeline_item_type}
+ if timeline_item.timeline_item_type == "comment":
serialized_item.update(review_comment_schema.dump(timeline_item))
- elif timeline_item.timeline_item_type == 'review':
+ elif timeline_item.timeline_item_type == "review":
serialized_item.update(review_schema.dump(timeline_item))
serialized.append(serialized_item)
return serialized
--- first pass
+++ second pass
@@ -6,14 +6,18 @@
serialized = []
user = self.context.get("user")
review_comment_schema = PaperReviewCommentSchema(context=self.context)
review_schema = PaperReviewSchema(context=self.context)
for timeline_item in value:
- if timeline_item.timeline_item_type in (
- "comment",
- "review",
- ) and not timeline_item.can_view(user):
+ if (
+ timeline_item.timeline_item_type
+ in (
+ "comment",
+ "review",
+ )
+ and not timeline_item.can_view(user)
+ ):
continue
serialized_item = {"timeline_item_type": timeline_item.timeline_item_type}
if timeline_item.timeline_item_type == "comment":
serialized_item.update(review_comment_schema.dump(timeline_item))
Encountered this error with 20.8b1 with a very long string with trailing %-formatting. The current minimal reproduction I've got is:
def a():
    return func(A, 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' % B) + f(x)
Which gives the debug log:
--- source
+++ first pass
@@ -1,3 +1,5 @@
-
def a():
- return func(A, 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' % B) + f(x)
+ return func(
+ A,
+ "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" % B,
+ ) + f(x)
--- first pass
+++ second pass
@@ -1,5 +1,9 @@
def a():
- return func(
- A,
- "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" % B,
- ) + f(x)
+ return (
+ func(
+ A,
+ "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
+ % B,
+ )
+ + f(x)
+ )
What makes the error go away, such that I apparently can't cause it again, is lengthening the string. (Apologies if this issue is meant for this error in specific cases, rather than general occurrences of it.)
Edit: a minor extra note: plainly running black --fast twice didn't work because of the cache, so I needed to touch the file in between runs.
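The cache behaviour makes sense once you know Black identifies unchanged files by stat metadata (roughly modification time and size; the exact cache format is an implementation detail, so treat this as an assumption). A stdlib illustration of why touch invalidates it:

```python
import os
import tempfile

def cache_key(path):
    """Roughly how a formatter cache can tell a file hasn't changed."""
    st = os.stat(path)
    return (st.st_mtime, st.st_size)

fd, path = tempfile.mkstemp(suffix=".py")
os.write(fd, b"x = 1\n")
os.close(fd)

before = cache_key(path)
# Change mtime without changing content (epoch 0 here for determinism;
# the touch command uses "now"):
os.utime(path, (0, 0))
assert cache_key(path) != before  # the cache entry no longer matches
os.remove(path)
```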
Got this error on version 20.8b1, but with line-length set to 120:
if int(self.cleaned_data.get('xxxxx')) in [
        C, D] and self.cleaned_data.get('xxxxx') in [None, 0]:
    self._errors["xxxxx"] = self.error_class([forms.Field.default_error_messages['xxxxx']])
    if 'xxxxx' in cleaned_data:
        cleaned_data.pop("xxxxx", None)
if int(self.cleaned_data.get('xxxxx')) in [
        A, B] and self.cleaned_data.get('xxxxx') in [None, 0]:
    self._errors["xxxxx"] = self.error_class([forms.Field.default_error_messages['xxxxx']])
    if 'xxxxx' in cleaned_data:
        cleaned_data.pop("xxxxx", None)
This gives the debug log:
+++ first pass
- if int(self.cleaned_data.get('xxxxx')) in [
- C, D] and self.cleaned_data.get('xxxxx') in [None, 0]:
- self._errors["xxxxx"] = self.error_class([forms.Field.default_error_messages['xxxxx']])
- if 'xxxxx' in cleaned_data:
+ if int(self.cleaned_data.get("xxxxx")) in [C, D] and self.cleaned_data.get(
+ "xxxxx"
+ ) in [None, 0]:
+ self._errors["xxxxx"] = self.error_class([forms.Field.default_error_messages["xxxxx"]])
+ if "xxxxx" in cleaned_data:
cleaned_data.pop("xxxxx", None)
- if int(self.cleaned_data.get('xxxxx')) in [
- A, B] and self.cleaned_data.get('xxxxx') in [None, 0]:
- self._errors["xxxxx"] = self.error_class([forms.Field.default_error_messages['xxxxx']])
- if 'xxxxx' in cleaned_data:
+ if int(self.cleaned_data.get("xxxxx")) in [
+ A,
+ B,
+ ] and self.cleaned_data.get("xxxxx") in [None, 0]:
+ self._errors["xxxxx"] = self.error_class([forms.Field.default_error_messages["xxxxx"]])
+ if "xxxxx" in cleaned_data:
cleaned_data.pop("xxxxx", None)
return cleaned_data
--- first pass
+++ second pass
@@ -2979,14 +2979,18 @@
) in [None, 0]:
self._errors["xxxxx"] = self.error_class([forms.Field.default_error_messages["xxxxx"]])
if "xxxxx" in cleaned_data:
cleaned_data.pop("xxxxx", None)
- if int(self.cleaned_data.get("xxxxx")) in [
- A,
- B,
- ] and self.cleaned_data.get("xxxxx") in [None, 0]:
+ if (
+ int(self.cleaned_data.get("xxxxx"))
+ in [
+ A,
+ B,
+ ]
+ and self.cleaned_data.get("xxxxx") in [None, 0]
+ ):
self._errors["xxxxx"] = self.error_class([forms.Field.default_error_messages["xxxxx"]])
if "xxxxx" in cleaned_data:
cleaned_data.pop("xxxxx", None)
return cleaned_data
Found two workarounds for this bug: substituting
if condition1 and condition2:
with either
if all([condition1, condition2]):
or
if condition1:
    if condition2:
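For what it's worth, the three spellings agree on plain boolean results, with one caveat worth knowing before rewriting code this way: all([...]) evaluates both conditions eagerly, whereas and (and the nested form) short-circuits. A quick truth-table check:

```python
def with_and(c1, c2):
    if c1 and c2:
        return True
    return False

def with_all(c1, c2):
    # Caveat: both list elements are evaluated before all() runs.
    if all([c1, c2]):
        return True
    return False

def with_nesting(c1, c2):
    if c1:
        if c2:
            return True
    return False

# All three agree for every boolean combination:
for c1 in (True, False):
    for c2 in (True, False):
        assert with_and(c1, c2) == with_all(c1, c2) == with_nesting(c1, c2)
```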
Found this bug on version 20.8b1
while depreciation_date <= end_date:
- depreciation_date = (
- date(
- depreciation_date.year,
- depreciation_date.month,
- depreciation_date.day
- )
- + relativedelta(months=+self.method_period)
- )
+ depreciation_date = date(
+ depreciation_date.year,
+ depreciation_date.month,
+ depreciation_date.day,
+ ) + relativedelta(months=+self.method_period)
--- first pass
+++ second pass
while depreciation_date <= end_date:
- depreciation_date = date(
- depreciation_date.year,
- depreciation_date.month,
- depreciation_date.day,
- ) + relativedelta(months=+self.method_period)
+ depreciation_date = (
+ date(
+ depreciation_date.year,
+ depreciation_date.month,
+ depreciation_date.day,
+ )
+ + relativedelta(months=+self.method_period)
+ )
Found an example which may be related:
def some_function():
    assert sorted(
        (f["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"], f["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"],
         f["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"])
        for f in []
    ) == [
        "AAAAAAAA", "BBBBBBB"
    ]
diff:
Mode(target_versions={<TargetVersion.PY36: 6>}, line_length=120, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -1,9 +1,9 @@
-
def some_function():
assert sorted(
- (f["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"], f["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"],
- f["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"])
+ (
+ f["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"],
+ f["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"],
+ f["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"],
+ )
for f in []
- ) == [
- "AAAAAAAA", "BBBBBBB"
- ]
+ ) == ["AAAAAAAA", "BBBBBBB"]
--- first pass
+++ second pass
@@ -1,9 +1,12 @@
def some_function():
- assert sorted(
- (
- f["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"],
- f["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"],
- f["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"],
+ assert (
+ sorted(
+ (
+ f["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"],
+ f["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"],
+ f["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"],
+ )
+ for f in []
)
- for f in []
- ) == ["AAAAAAAA", "BBBBBBB"]
+ == ["AAAAAAAA", "BBBBBBB"]
+ )
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -222,21 +222,18 @@
self.assertEqual(item_chunkset.all_chunks, set([success_chunk_id]))
# -- Ingest Items --
- ingest_cmd = (
- IngestItemsArgs(
- item_chunkset_uri=item_chunkset_uri,
- error_records_uri=error_records_uri,
- etl_sas=etl_sas,
- limit=None,
- insert_only=False,
- num_processes=None
- ).to_args()
- + ["--log-uri", os.path.join(INGEST_ITEMS_COMMAND, 'log.txt')]
- )
+ ingest_cmd = IngestItemsArgs(
+ item_chunkset_uri=item_chunkset_uri,
+ error_records_uri=error_records_uri,
+ etl_sas=etl_sas,
+ limit=None,
+ insert_only=False,
+ num_processes=None,
+ ).to_args() + ["--log-uri", os.path.join(INGEST_ITEMS_COMMAND, "log.txt")]
result = self.run_command(ingest_cmd)
self.assertEqual(result.exit_code, 0)
--- first pass
+++ second pass
@@ -222,18 +222,21 @@
self.assertEqual(item_chunkset.all_chunks, set([success_chunk_id]))
# -- Ingest Items --
- ingest_cmd = IngestItemsArgs(
- item_chunkset_uri=item_chunkset_uri,
- error_records_uri=error_records_uri,
- etl_sas=etl_sas,
- limit=None,
- insert_only=False,
- num_processes=None,
- ).to_args() + ["--log-uri", os.path.join(INGEST_ITEMS_COMMAND, "log.txt")]
+ ingest_cmd = (
+ IngestItemsArgs(
+ item_chunkset_uri=item_chunkset_uri,
+ error_records_uri=error_records_uri,
+ etl_sas=etl_sas,
+ limit=None,
+ insert_only=False,
+ num_processes=None,
+ ).to_args()
+ + ["--log-uri", os.path.join(INGEST_ITEMS_COMMAND, "log.txt")]
+ )
result = self.run_command(ingest_cmd)
self.assertEqual(result.exit_code, 0)
Imma just dump this here:
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -15,17 +15,23 @@
== modcfg.loads("module a:\n\tb=c")
== [Module("a", [{"b": "c"}])]
)
-
def test_multiple_mod():
- assert modcfg.loads(
+ assert (
+ modcfg.loads(
"""
mod a:
b = c
mod b:
c = d
module c:
d = e
"""
- ) == [Module("a", [{"b": "c"}]),Module("b", [{"c": "d"}]),Module("c", [{"d": "e"}])]
+ )
+ == [
+ Module("a", [{"b": "c"}]),
+ Module("b", [{"c": "d"}]),
+ Module("c", [{"d": "e"}]),
+ ]
+ )
--- first pass
+++ second pass
@@ -16,22 +16,19 @@
== [Module("a", [{"b": "c"}])]
)
def test_multiple_mod():
- assert (
- modcfg.loads(
- """
+ assert modcfg.loads(
+ """
mod a:
b = c
mod b:
c = d
module c:
d = e
"""
- )
- == [
- Module("a", [{"b": "c"}]),
- Module("b", [{"c": "d"}]),
- Module("c", [{"d": "e"}]),
- ]
- )
+ ) == [
+ Module("a", [{"b": "c"}]),
+ Module("b", [{"c": "d"}]),
+ Module("c", [{"d": "e"}]),
+ ]
Found another case. Code:
assert x in ["12345678901234567890123456789012345678901234567890123456789012", "", ""], "%s" % ""
Error:
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -1 +1,5 @@
-assert x in ["12345678901234567890123456789012345678901234567890123456789012", "", ""], "%s" % ""
+assert x in [
+ "12345678901234567890123456789012345678901234567890123456789012",
+ "",
+ "",
+], ("%s" % "")
--- first pass
+++ second pass
@@ -1,5 +1,7 @@
assert x in [
"12345678901234567890123456789012345678901234567890123456789012",
"",
"",
-], ("%s" % "")
+], (
+ "%s" % ""
+)
Found an example:
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -1,3 +1,6 @@
se = func(
- self.entry[0], truncate_now, mask = self.entry[1], begin=truncate_now - timedelta(days=7)
+ self.entry[0],
+ truncate_now,
+ mask=self.entry[1],
+ begin=truncate_now - timedelta(days=7),
) / (self.WEEK * 60)
--- first pass
+++ second pass
@@ -1,6 +1,9 @@
-se = func(
- self.entry[0],
- truncate_now,
- mask=self.entry[1],
- begin=truncate_now - timedelta(days=7),
-) / (self.WEEK * 60)
+se = (
+ func(
+ self.entry[0],
+ truncate_now,
+ mask=self.entry[1],
+ begin=truncate_now - timedelta(days=7),
+ )
+ / (self.WEEK * 60)
+)
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -101,22 +101,61 @@
assert (
get_valid_moves([Card(rank, Suit.DIAMONDS)], player_cards, Suit.DIAMONDS)
== player_cards
)
+ for adut_suit in [Suit.DIAMONDS, Suit.SPADES, Suit.CLUBS]:
+ assert get_valid_moves(
+ [Card(Rank.JACK, Suit.HEARTS), Card(Rank.VIII, Suit.HEARTS)],
+ player_cards,
+ adut_suit,
+ ) == [Card(Rank.X, Suit.HEARTS), Card(Rank.QUEEN, Suit.HEARTS)]
+ assert get_valid_moves(
+ [Card(Rank.JACK, Suit.HEARTS), Card(Rank.KING, Suit.HEARTS)],
+ player_cards,
+ adut_suit,
+ ) == [Card(Rank.X, Suit.HEARTS)]
for adut_suit in [Suit.DIAMONDS, Suit.SPADES, Suit.CLUBS]:
- assert get_valid_moves([Card(Rank.JACK, Suit.HEARTS), Card(Rank.VIII, Suit.HEARTS)], player_cards, adut_suit) == [Card(Rank.X, Suit.HEARTS), Card(Rank.QUEEN, Suit.HEARTS)]
- assert get_valid_moves([Card(Rank.JACK, Suit.HEARTS), Card(Rank.KING, Suit.HEARTS)], player_cards, adut_suit) == [Card(Rank.X, Suit.HEARTS)]
+ assert get_valid_moves(
+ [Card(Rank.JACK, Suit.HEARTS), Card(Rank.JACK, adut_suit)],
+ player_cards,
+ adut_suit,
+ ) == [
+ Card(Rank.IX, Suit.HEARTS),
+ Card(Rank.X, Suit.HEARTS),
+ Card(Rank.QUEEN, Suit.HEARTS),
+ ]
- for adut_suit in [Suit.DIAMONDS, Suit.SPADES, Suit.CLUBS]:
- assert get_valid_moves([Card(Rank.JACK, Suit.HEARTS), Card(Rank.JACK, adut_suit)], player_cards, adut_suit) == [Card(Rank.IX, Suit.HEARTS), Card(Rank.X, Suit.HEARTS), Card(Rank.QUEEN, Suit.HEARTS)]
+ assert get_valid_moves(
+ [Card(Rank.QUEEN, Suit.DIAMONDS), Card(Rank.IX, Suit.DIAMONDS)],
+ player_cards,
+ Suit.SPADES,
+ ) == [
+ Card(Rank.QUEEN, Suit.SPADES),
+ Card(Rank.ACE, Suit.SPADES),
+ Card(Rank.VII, Suit.SPADES),
+ ]
- assert get_valid_moves([Card(Rank.QUEEN, Suit.DIAMONDS), Card(Rank.IX, Suit.DIAMONDS)], player_cards, Suit.SPADES) == [Card(Rank.QUEEN, Suit.SPADES), Card(Rank.ACE, Suit.SPADES), Card(Rank.VII, Suit.SPADES)]
-
- assert get_valid_moves([Card(Rank.QUEEN, Suit.DIAMONDS), Card(Rank.VIII, Suit.SPADES)], player_cards, Suit.SPADES) == [Card(Rank.QUEEN, Suit.SPADES), Card(Rank.ACE, Suit.SPADES)]
- assert get_valid_moves([Card(Rank.QUEEN, Suit.DIAMONDS), Card(Rank.X, Suit.SPADES)], player_cards, Suit.SPADES) == [Card(Rank.ACE, Suit.SPADES)]
- assert get_valid_moves([Card(Rank.QUEEN, Suit.DIAMONDS), Card(Rank.IX, Suit.SPADES)], player_cards, Suit.SPADES) == [Card(Rank.QUEEN, Suit.SPADES), Card(Rank.ACE, Suit.SPADES), Card(Rank.VII, Suit.SPADES)]
+ assert get_valid_moves(
+ [Card(Rank.QUEEN, Suit.DIAMONDS), Card(Rank.VIII, Suit.SPADES)],
+ player_cards,
+ Suit.SPADES,
+ ) == [Card(Rank.QUEEN, Suit.SPADES), Card(Rank.ACE, Suit.SPADES)]
+ assert get_valid_moves(
+ [Card(Rank.QUEEN, Suit.DIAMONDS), Card(Rank.X, Suit.SPADES)],
+ player_cards,
+ Suit.SPADES,
+ ) == [Card(Rank.ACE, Suit.SPADES)]
+ assert get_valid_moves(
+ [Card(Rank.QUEEN, Suit.DIAMONDS), Card(Rank.IX, Suit.SPADES)],
+ player_cards,
+ Suit.SPADES,
+ ) == [
+ Card(Rank.QUEEN, Suit.SPADES),
+ Card(Rank.ACE, Suit.SPADES),
+ Card(Rank.VII, Suit.SPADES),
+ ]
###
# TODO repeat with multiple cards
return
--- first pass
+++ second pass
@@ -102,20 +102,26 @@
get_valid_moves([Card(rank, Suit.DIAMONDS)], player_cards, Suit.DIAMONDS)
== player_cards
)
for adut_suit in [Suit.DIAMONDS, Suit.SPADES, Suit.CLUBS]:
- assert get_valid_moves(
- [Card(Rank.JACK, Suit.HEARTS), Card(Rank.VIII, Suit.HEARTS)],
- player_cards,
- adut_suit,
- ) == [Card(Rank.X, Suit.HEARTS), Card(Rank.QUEEN, Suit.HEARTS)]
- assert get_valid_moves(
- [Card(Rank.JACK, Suit.HEARTS), Card(Rank.KING, Suit.HEARTS)],
- player_cards,
- adut_suit,
- ) == [Card(Rank.X, Suit.HEARTS)]
+ assert (
+ get_valid_moves(
+ [Card(Rank.JACK, Suit.HEARTS), Card(Rank.VIII, Suit.HEARTS)],
+ player_cards,
+ adut_suit,
+ )
+ == [Card(Rank.X, Suit.HEARTS), Card(Rank.QUEEN, Suit.HEARTS)]
+ )
+ assert (
+ get_valid_moves(
+ [Card(Rank.JACK, Suit.HEARTS), Card(Rank.KING, Suit.HEARTS)],
+ player_cards,
+ adut_suit,
+ )
+ == [Card(Rank.X, Suit.HEARTS)]
+ )
for adut_suit in [Suit.DIAMONDS, Suit.SPADES, Suit.CLUBS]:
assert get_valid_moves(
[Card(Rank.JACK, Suit.HEARTS), Card(Rank.JACK, adut_suit)],
player_cards,
@@ -134,20 +140,26 @@
Card(Rank.QUEEN, Suit.SPADES),
Card(Rank.ACE, Suit.SPADES),
Card(Rank.VII, Suit.SPADES),
]
- assert get_valid_moves(
- [Card(Rank.QUEEN, Suit.DIAMONDS), Card(Rank.VIII, Suit.SPADES)],
- player_cards,
- Suit.SPADES,
- ) == [Card(Rank.QUEEN, Suit.SPADES), Card(Rank.ACE, Suit.SPADES)]
- assert get_valid_moves(
- [Card(Rank.QUEEN, Suit.DIAMONDS), Card(Rank.X, Suit.SPADES)],
- player_cards,
- Suit.SPADES,
- ) == [Card(Rank.ACE, Suit.SPADES)]
+ assert (
+ get_valid_moves(
+ [Card(Rank.QUEEN, Suit.DIAMONDS), Card(Rank.VIII, Suit.SPADES)],
+ player_cards,
+ Suit.SPADES,
+ )
+ == [Card(Rank.QUEEN, Suit.SPADES), Card(Rank.ACE, Suit.SPADES)]
+ )
+ assert (
+ get_valid_moves(
+ [Card(Rank.QUEEN, Suit.DIAMONDS), Card(Rank.X, Suit.SPADES)],
+ player_cards,
+ Suit.SPADES,
+ )
+ == [Card(Rank.ACE, Suit.SPADES)]
+ )
assert get_valid_moves(
[Card(Rank.QUEEN, Suit.DIAMONDS), Card(Rank.IX, Suit.SPADES)],
player_cards,
Suit.SPADES,
) == [
Hello,
I think I've encountered this bug too, and following "If you find a case of this, please attach the generated log here so we can investigate," here is the generated log (and the full file):
Mode(target_versions=set(), line_length=88, string_normalization=True, experimental_string_processing=False, is_pyi=False)
--- source
+++ first pass
@@ -10,11 +10,13 @@
assert parse_config(project) is None
def test_parse_config():
- with RepositoryForTests(TEST_DATA / "cube_doctor_yml/in_readme_example_config") as repo:
+ with RepositoryForTests(
+ TEST_DATA / "cube_doctor_yml/in_readme_example_config"
+ ) as repo:
project = FakeGitlabCube(repo)
assert parse_config(project) is not None
@@ -27,11 +29,13 @@
"add-yamllint",
"replace-set-attributes",
"update-licence-dates",
"auto-upgrade-dependencies",
]
- with RepositoryForTests(TEST_DATA / "cube_doctor_yml/in_readme_example_config") as repo:
+ with RepositoryForTests(
+ TEST_DATA / "cube_doctor_yml/in_readme_example_config"
+ ) as repo:
project = FakeGitlabCube(repo)
config = parse_config(project)
assert sorted(
command.command_name for (command, _) in commands_to_run(config)
@@ -41,11 +45,13 @@
"fix-README",
"replace-set-attributes",
"update-licence-dates",
"auto-upgrade-dependencies",
]
- with RepositoryForTests(TEST_DATA / "cube_doctor_yml/in_less_commands_than_in_readme_example_config") as repo:
+ with RepositoryForTests(
+ TEST_DATA / "cube_doctor_yml/in_less_commands_than_in_readme_example_config"
+ ) as repo:
project = FakeGitlabCube(repo)
config = parse_config(project)
assert sorted(
command.command_name for (command, _) in commands_to_run(config)
@@ -59,18 +65,28 @@
("add-pypi-publish", {"merge_when_pipeline_succeeds": False}),
("add-deb-publish", {"merge_when_pipeline_succeeds": False}),
("add-yamllint", {"merge_when_pipeline_succeeds": False}),
("replace-set-attributes", {"merge_when_pipeline_succeeds": False}),
("update-licence-dates", {"merge_when_pipeline_succeeds": False}),
- ("auto-upgrade-dependencies", {"merge_when_pipeline_succeeds": False, "one-by-one": True, "all-at-once": True}),
+ (
+ "auto-upgrade-dependencies",
+ {
+ "merge_when_pipeline_succeeds": False,
+ "one-by-one": True,
+ "all-at-once": True,
+ },
+ ),
]
- with RepositoryForTests(TEST_DATA / "cube_doctor_yml/in_readme_example_config") as repo:
+ with RepositoryForTests(
+ TEST_DATA / "cube_doctor_yml/in_readme_example_config"
+ ) as repo:
project = FakeGitlabCube(repo)
config = parse_config(project)
assert sorted(
- [(command.command_name, y) for (command, y) in commands_to_run(config)], key=lambda x: x[0]
+ [(command.command_name, y) for (command, y) in commands_to_run(config)],
+ key=lambda x: x[0],
) == sorted(expected_commands_and_config, key=lambda x: x[0])
def test_commands_to_run_config_default():
expected_commands_and_config = [
@@ -79,18 +95,28 @@
("add-pypi-publish", {"merge_when_pipeline_succeeds": True}),
("add-deb-publish", {"merge_when_pipeline_succeeds": True}),
("add-yamllint", {"merge_when_pipeline_succeeds": True}),
("replace-set-attributes", {"merge_when_pipeline_succeeds": True}),
("update-licence-dates", {"merge_when_pipeline_succeeds": True}),
- ("auto-upgrade-dependencies", {"merge_when_pipeline_succeeds": True, "one-by-one": True, "all-at-once": True}),
+ (
+ "auto-upgrade-dependencies",
+ {
+ "merge_when_pipeline_succeeds": True,
+ "one-by-one": True,
+ "all-at-once": True,
+ },
+ ),
]
- with RepositoryForTests(TEST_DATA / "cube_doctor_yml/in_readme_example_config_default") as repo:
+ with RepositoryForTests(
+ TEST_DATA / "cube_doctor_yml/in_readme_example_config_default"
+ ) as repo:
project = FakeGitlabCube(repo)
config = parse_config(project)
assert sorted(
- [(command.command_name, y) for (command, y) in commands_to_run(config)], key=lambda x: x[0]
+ [(command.command_name, y) for (command, y) in commands_to_run(config)],
+ key=lambda x: x[0],
) == sorted(expected_commands_and_config, key=lambda x: x[0])
def test_commands_to_run_config_specific():
expected_commands_and_config = [
@@ -99,14 +125,24 @@
("add-pypi-publish", {"merge_when_pipeline_succeeds": True}),
("add-deb-publish", {"merge_when_pipeline_succeeds": True}),
("add-yamllint", {"merge_when_pipeline_succeeds": True}),
("replace-set-attributes", {"merge_when_pipeline_succeeds": True}),
("update-licence-dates", {"merge_when_pipeline_succeeds": True}),
- ("auto-upgrade-dependencies", {"merge_when_pipeline_succeeds": False, "one-by-one": True, "all-at-once": True}),
+ (
+ "auto-upgrade-dependencies",
+ {
+ "merge_when_pipeline_succeeds": False,
+ "one-by-one": True,
+ "all-at-once": True,
+ },
+ ),
]
- with RepositoryForTests(TEST_DATA / "cube_doctor_yml/in_readme_example_config_specific") as repo:
+ with RepositoryForTests(
+ TEST_DATA / "cube_doctor_yml/in_readme_example_config_specific"
+ ) as repo:
project = FakeGitlabCube(repo)
config = parse_config(project)
assert sorted(
- [(command.command_name, y) for (command, y) in commands_to_run(config)], key=lambda x: x[0]
+ [(command.command_name, y) for (command, y) in commands_to_run(config)],
+ key=lambda x: x[0],
) == sorted(expected_commands_and_config, key=lambda x: x[0])
--- first pass
+++ second pass
@@ -80,14 +80,17 @@
TEST_DATA / "cube_doctor_yml/in_readme_example_config"
) as repo:
project = FakeGitlabCube(repo)
config = parse_config(project)
- assert sorted(
- [(command.command_name, y) for (command, y) in commands_to_run(config)],
- key=lambda x: x[0],
- ) == sorted(expected_commands_and_config, key=lambda x: x[0])
+ assert (
+ sorted(
+ [(command.command_name, y) for (command, y) in commands_to_run(config)],
+ key=lambda x: x[0],
+ )
+ == sorted(expected_commands_and_config, key=lambda x: x[0])
+ )
def test_commands_to_run_config_default():
expected_commands_and_config = [
("fix-README", {"merge_when_pipeline_succeeds": True, "extension": "rst"}),
@@ -110,14 +113,17 @@
TEST_DATA / "cube_doctor_yml/in_readme_example_config_default"
) as repo:
project = FakeGitlabCube(repo)
config = parse_config(project)
- assert sorted(
- [(command.command_name, y) for (command, y) in commands_to_run(config)],
- key=lambda x: x[0],
- ) == sorted(expected_commands_and_config, key=lambda x: x[0])
+ assert (
+ sorted(
+ [(command.command_name, y) for (command, y) in commands_to_run(config)],
+ key=lambda x: x[0],
+ )
+ == sorted(expected_commands_and_config, key=lambda x: x[0])
+ )
def test_commands_to_run_config_specific():
expected_commands_and_config = [
("fix-README", {"merge_when_pipeline_succeeds": True, "extension": "rst"}),
@@ -140,9 +146,12 @@
TEST_DATA / "cube_doctor_yml/in_readme_example_config_specific"
) as repo:
project = FakeGitlabCube(repo)
config = parse_config(project)
- assert sorted(
- [(command.command_name, y) for (command, y) in commands_to_run(config)],
- key=lambda x: x[0],
- ) == sorted(expected_commands_and_config, key=lambda x: x[0])
+ assert (
+ sorted(
+ [(command.command_name, y) for (command, y) in commands_to_run(config)],
+ key=lambda x: x[0],
+ )
+ == sorted(expected_commands_and_config, key=lambda x: x[0])
+ )
Kind regards,
Fixed by #2126.
This is a rare problem that we're currently investigating. The most common case involves a combination of magic trailing commas and optional parentheses. Long story short, the diffs above show the behavior:
The expected behavior is that there is no difference between the first formatting pass and the second: formatting already-formatted code should change nothing. In practice, Black sometimes decides for or against optional parentheses differently depending on whether the line should be exploded or not. This is what needs fixing.
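The expected property here is idempotency: a second formatting pass must be a no-op. A minimal sketch of how one could check that for any formatter (the stand-in `strip_trailing` formatter below is hypothetical and for illustration only; a real check would call Black itself, e.g. via `black.format_str`):

```python
def is_idempotent(format_fn, source):
    """True if a second formatting pass is a no-op, i.e. the formatter
    reaches a fixed point after a single pass."""
    first = format_fn(source)
    return format_fn(first) == first


# Stand-in formatter (hypothetical, not Black): strips trailing
# whitespace from every line, which is trivially idempotent.
def strip_trailing(src):
    return "\n".join(line.rstrip() for line in src.splitlines())


print(is_idempotent(strip_trailing, "x = 1   \ny = 2\n"))  # True
```

An unstable formatter, like the affected Black versions on the examples above, would return `False` here because the second pass still changes the output.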
Workaround
We're working on fixing this. Until then, format the file twice with `--fast`; the file will keep its formatting moving forward.
Call To Action
If you find a case of this, please attach the generated log here so we can investigate. We've already added three identifying examples of this as expected failures to https://github.com/psf/black/pull/1627/commits/25206d8cc6e98143f0b10bcbe9e8b41b8b543abe.
Finally, if you're interested in debugging this yourself, look for `should_explode` in `if` statements in the Black codebase. Those are the decisions that lead to unstable formatting.