python / cpython

The Python programming language
https://www.python.org

PEP 701 – Syntactic formalization of f-strings #102856

Closed pablogsal closed 1 year ago

pablogsal commented 1 year ago

Linked PRs

pablogsal commented 1 year ago

CC: @lysnikolaou @isidentical @mgmacias95

pablogsal commented 1 year ago

See this for the latest report on errors from @isidentical

pablogsal commented 1 year ago

Draft PR for the C tokenizer up: https://github.com/python/cpython/pull/102855

pablogsal commented 1 year ago

Things to clean up for https://github.com/python/cpython/pull/102855:

pablogsal commented 1 year ago

OK, with https://github.com/python/cpython/pull/102855 we have the following failing tests:

Most of these just need updated error messages, line numbers and the like, but some may reveal actual bugs, so we should check them. Please mention which ones you are working on so we don't clash with one another.

mgmacias95 commented 1 year ago

Working on test_tokenize

Eclips4 commented 1 year ago

Hello, Pablo! Can I work on test_ast? I recently sent some PRs touching this file (for example, #102797), so I have some experience with it. =)

ramvikrams commented 1 year ago

I can work on test_type_comments and test_unparse.

pablogsal commented 1 year ago

@Eclips4 @ramvikrams wonderful! Just make PRs against my fork!

Report here or ping any of us if you find something that could be a bug (don't just fix the tests blindly because there may be bugs lurking).

pablogsal commented 1 year ago

@lysnikolaou can you work on cleaning up the grammar + the actions?

@isidentical can you work on cleaning up some of the tokenizer layers? (This is quite a lot so we can probably work together here).

Eclips4 commented 1 year ago

@pablogsal About test_ast.py: it seems like only one test will fail, and as I understand it, that's a bug: https://github.com/python/cpython/blob/7f760c2fca3dc5f46a518f5b7622783039bc8b7b/Lib/test/test_ast.py#L779-L780

I think there are two solutions:

  1. Remove this test, because support for Python 3.7 will end soon.
  2. Errors are now raised by tokenizer.c instead of string_parser.c, so as I understand it, we should change python.gram, is that right? (We need access to feature_version, which is inaccessible in the tokenizer.)
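
For context, a minimal, hedged sketch of the feature_version mechanism in question, using assignment expressions as the gated construct (the actual test exercises an f-string feature; the snippet is illustrative only):

import ast

# feature_version tells the parser to reject syntax newer than the given
# version; assignment expressions (3.8+) are one construct that is checked
# in the grammar rather than in the tokenizer.
ast.parse("(x := 1)", feature_version=(3, 8))      # accepted
try:
    ast.parse("(x := 1)", feature_version=(3, 7))  # rejected by the grammar
except SyntaxError as exc:
    print("rejected:", exc.msg)
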
pablogsal commented 1 year ago

2. Errors are now raised by tokenizer.c instead of string_parser.c, so as I understand it, we should change python.gram, is that right? (We need access to feature_version, which is inaccessible in the tokenizer.)

We could probably do that, but on the other hand I would prefer not to overcomplicate this, so I think (1) is better.

lysnikolaou commented 1 year ago

@lysnikolaou can you work on cleaning up the grammar + the actions?

Will do!

Eclips4 commented 1 year ago

Also, I can take a look at test_cmd_line_script. Seems easy.

pablogsal commented 1 year ago

Also, I can take a look at test_cmd_line_script. Seems easy.

All yours!

CharlieZhao95 commented 1 year ago

I found that no one has claimed test_eof yet, so I did some work on it. :) Failing test case: test_eof.test_eof_with_line_continuation

I looked at its commit history. This test case is a regression test for a crash, so it seems like a good choice to keep the case and just update the expected error message. https://github.com/python/cpython/blob/72186aa637bc88cd5f5e234803af64acab25994c/Lib/test/test_eof.py#L39-L40

That is, update the expected message from unexpected EOF while parsing (<string>, line 1) to (unicode error) 'unicodeescape' codec can't decode bytes in position 0-1: truncated \xXX escape (<string>, line 1)
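
For reference, a hedged way to see which of the two errors wins on a given build (the exact source string used by the test is in the linked lines; this is only an approximation of it):

# A string literal with a truncated \x escape, followed by a line
# continuation right before EOF; print whichever SyntaxError the
# installed interpreter reports.
try:
    compile('"\\xhh" \\', "<string>", "exec")
except SyntaxError as exc:
    print(exc.msg)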

ramvikrams commented 1 year ago

@pablogsal in test_type_comments we have not used any f-strings.

pablogsal commented 1 year ago

@pablogsal in test_type_comments we have not used any f-strings.

The one failing there is this problem:

======================================================================
FAIL: test_fstring (test.test_type_comments.TypeCommentTests.test_fstring)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/test/test_type_comments.py", line 275, in test_fstring
    for tree in self.parse_all(fstring, minver=6):
  File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/test/test_type_comments.py", line 239, in parse_all
    with self.assertRaisesRegex(SyntaxError, expected_regex,
AssertionError: SyntaxError not raised : feature_version=(3, 4)

----------------------------------------------------------------------

which I think is a feature version problem.

Eclips4 commented 1 year ago

@pablogsal in test_type_comments we have not used any f-strings.

The one failing there is this problem:

======================================================================
FAIL: test_fstring (test.test_type_comments.TypeCommentTests.test_fstring)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/test/test_type_comments.py", line 275, in test_fstring
    for tree in self.parse_all(fstring, minver=6):
  File "/home/runner/work/cpython/cpython-ro-srcdir/Lib/test/test_type_comments.py", line 239, in parse_all
    with self.assertRaisesRegex(SyntaxError, expected_regex,
AssertionError: SyntaxError not raised : feature_version=(3, 4)

----------------------------------------------------------------------

which I think is a feature version problem.

https://github.com/python/cpython/blob/46957091433bfa097d7ea19b177bf42a52412f2d/Lib/test/test_type_comments.py#L223 We can just change this to 6 and the test will pass. I haven't researched this problem, but this solution looks like the simplest one. However... supporting Python 3.4 && 3.5 syntax seems kinda strange.

pablogsal commented 1 year ago

We can just change this to 6 and the test will pass. I haven't researched this problem, but this solution looks like the simplest one. However... supporting Python 3.4 && 3.5 syntax seems kinda strange.

I don't think that will work because we are not doing version checking anymore. See the previous comments. The fix is probably to not pass feature_version.

sunmy2019 commented 1 year ago

Looks like no one is analyzing test_exceptions, so I will look into it over the next couple of days.

Four platforms seem to have hit the same problem here.

pablogsal commented 1 year ago

@isidentical @lysnikolaou I have pushed some rules for error messages; please take a look and complete them with more if you have some spare cycles. With these, the failures in test_fstring have decreased notably.

isidentical commented 1 year ago

I can confirm that the total number of failures has decreased from 88 to 63. I'll try to identify the highest-impact ones and submit a PR to clear them.

isidentical commented 1 year ago

If anyone intends to work on any of the remaining tasks in test_fstring, please double-check against this PR (https://github.com/pablogsal/cpython/pull/52), since it brings the total failures down to 30 and includes some explanations/required decisions for the rest.

sunmy2019 commented 1 year ago

I looked into the failure in test_exceptions:

check(b'Python = "\xcf\xb3\xf2\xee\xed" +', 1, 18)

The old parser and the new parser raise the same exception (UnicodeDecodeError), but with a different col_offset, because the error is attributed to the wrong token.

I would consider it a bug in the old parser. As this comment mentions, the error token was not set correctly in the old parser: https://github.com/python/cpython/blob/64cb1a4f0f0bc733a33ad7a6520e749ca1cdd43f/Parser/string_parser.c#L38-L48

Maybe we should open an issue for the old parser? But the fix might be error-prone, since we would need to keep track of every possible code path.

As for the new parser, I think a change in the test case would be fine.
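
A hedged way to see the discrepancy concretely, without asserting a specific value (the offset differs between the old and the new parser):

# The same byte string test_exceptions feeds to check(); the invalid UTF-8
# sequence makes compile() raise a SyntaxError, and its offset is the value
# under discussion.
try:
    compile(b'Python = "\xcf\xb3\xf2\xee\xed" +', "<fragment>", "exec")
except SyntaxError as exc:
    print(exc.lineno, exc.offset, exc.msg)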

sunmy2019 commented 1 year ago

I am working on a PR to fix test_unparse this weekend.

_PyPegen_concatenate_strings did not implement concatenating an empty Constant with a FormattedValue, resulting in an unparse failure.
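
A hedged guess at the kind of input involved: implicit concatenation of an empty string literal with an f-string ('value' is just a placeholder name):

import ast

tree = ast.parse("'' f'{value}'")
joined = tree.body[0].value   # a JoinedStr (whether the empty Constant survives depends on the version)
print(ast.dump(joined))
print(ast.unparse(tree))      # this unparse round trip is what was failing on the branch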

sunmy2019 commented 1 year ago

Hi, I got some bad news.

I have been testing for memory leaks with ./python -m test -j $(nproc) -R :, and ~30% of the tests failed on the current head 270b66168.

For example,

0:01:01 load avg: 13.77 [157/433/42] test_unittest failed (reference leak)
beginning 9 repetitions
test_unittest leaked [89, 89, 89, 89] references, sum=356
test_unittest leaked [90, 89, 89, 89] memory blocks, sum=357
.......
0:01:16 load avg: 15.00 [185/433/49] test_inspect failed (reference leak)
beginning 9 repetitions
test_inspect leaked [429, 429, 429, 429] references, sum=1716
test_inspect leaked [318, 318, 318, 317] memory blocks, sum=1271

These references most likely leak during compilation (e.g. when using import, compile/exec/eval, or ast.parse).

We might need to look into that.
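
A hedged sketch of the kind of leak check involved: repeatedly compile the same source and watch the interpreter's total reference count (only meaningful on a --with-pydebug build, which is also what -R relies on):

import sys

src = 'f"{1 + 1}"'
for _ in range(5):
    compile(src, "<leaktest>", "eval")
    if hasattr(sys, "gettotalrefcount"):   # debug builds only
        print(sys.gettotalrefcount())      # should stabilize if nothing leaks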


Update: the memory leaks are fixed by commit https://github.com/pablogsal/cpython/pull/63/commits/d8b12e24bcad12a7830cf3e2758bea906a1edf41

The root cause is that someone forgot to use _PyArena_AddPyObject in 3 places.

This is very tricky because _PyArena_AddPyObject calls are scattered across many subroutines. Sometimes you should add _PyArena_AddPyObject, but sometimes you should not (adding it would cause a negative refcount).

As the old saying goes, managing memory by hand is painful and error-prone. This check could be automated by analyzing the PyObject*s registered with the arena by the time the AST is created, but that is a totally different story.

pablogsal commented 1 year ago

Only 12 tests left in test_fstring and we are ready to go!

lysnikolaou commented 1 year ago

I'll have a look at them today.

sunmy2019 commented 1 year ago

I think we can make it before beta 1. Formalizing f-string error messages (see https://github.com/pablogsal/cpython/pull/52#discussion_r1148587385) can be left to the end, since it is a non-functional change.

pablogsal commented 1 year ago

I pushed a commit and we are now down to 8 failing tests.

pablogsal commented 1 year ago

I think we can make it before beta 1. Formalizing f-string error messages (see pablogsal#52 (comment)) can be left to the end, since it is a non-functional change.

Yes and no: there may be some design consequences and I am not a fan of deactivating tests to reactivate them later.

lysnikolaou commented 1 year ago

There's a small discrepancy around the conversion character between our version and the string parser. The string parser does not allow spaces before or after the conversion character. For example:

>>> f"{'3'!r : >10s}"
  File "<stdin>", line 1
    f"{'3'!r : >10s}"
                     ^
SyntaxError: f-string: expecting '}'

Our version, while not allowing spaces between ! and the conversion character, does allow spaces after it:

>>> f"{'3'!r : >10s}"
"       '3'"

What do you think about this? I feel okay about it, although it isn't really consistent with our decision on spaces after the ! character.

pablogsal commented 1 year ago

What do you think about this? I feel okay about it, although it isn't really consistent with our decision on spaces after the ! character.

I feel this is fine, and it would be very difficult to disallow, to be honest, because these are different tokens anyway. It won't affect existing code and I don't see any reason to forbid it going forward.

pablogsal commented 1 year ago

7 tests to go thanks to @lysnikolaou ❤️

lysnikolaou commented 1 year ago

With pablogsal/cpython#65, we only have two errors to go.

======================================================================
FAIL: test_format_specifier_expressions (test.test_fstring.TestCase.test_format_specifier_expressions) (str='f\'{"s"!r{":10"}}\'')
----------------------------------------------------------------------
  File "<string>", line 1
    f'{"s"!r{":10"}}'
            ^
SyntaxError: f-string: expecting '}'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/lysnikolaou/repos/python/cpython/Lib/test/test_fstring.py", line 32, in assertAllRaise
    with self.assertRaisesRegex(exception_type, regex):
AssertionError: "f-string: invalid conversion character 'r{"': expected 's', 'r', or 'a'" does not match "f-string: expecting '}' (<string>, line 1)"

======================================================================
FAIL: test_format_specifier_expressions (test.test_fstring.TestCase.test_format_specifier_expressions) (str="f'{4:{/5}}'")
----------------------------------------------------------------------
  File "<string>", line 1
    f'{4:{/5}}'
          ^
SyntaxError: f-string: expecting '}'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/lysnikolaou/repos/python/cpython/Lib/test/test_fstring.py", line 32, in assertAllRaise
    with self.assertRaisesRegex(exception_type, regex):
AssertionError: "f-string: invalid syntax" does not match "f-string: expecting '}' (<string>, line 1)"

These are hard to handle correctly if we keep the invalid_replacement_field rule the same to catch those expecting '}' cases. I see the following three options:

  1. Take more time to figure out whether we can find a smart way to differentiate between these cases (I haven't been able to find one since yesterday).
  2. Be okay with sometimes emitting an expecting '}' error even though there are other syntax errors within the f-string (keeping invalid_replacement_field as-is).
  3. Be okay with sometimes emitting an invalid syntax error when we could have produced a better one (probably by making invalid_replacement_field require both a conversion character and a valid format spec).

Thoughts?

sunmy2019 commented 1 year ago

Catching all errors seems unrealistic, but I think we do have an opportunity to improve.

By changing invalid_replacement_field, we can produce a better message about what to expect, and we can capture the syntax error coming from the field itself (yield_expr | star_expressions):

invalid_replacement_field:
    | '{' a='=' { RAISE_SYNTAX_ERROR_KNOWN_LOCATION(a, "f-string: expression required before '='") }
    | '{' a=':' { RAISE_SYNTAX_ERROR_KNOWN_LOCATION(a, "f-string: expression required before ':'") }
    | '{' a='!' { RAISE_SYNTAX_ERROR_KNOWN_LOCATION(a, "f-string: expression required before '!'") }
    | '{' a='}' { RAISE_SYNTAX_ERROR_KNOWN_LOCATION(a, "f-string: empty expression not allowed") }
    | '{' (yield_expr | star_expressions) !('=' | '!' | ':' | '}') {
        PyErr_Occurred() ? NULL : RAISE_SYNTAX_ERROR("f-string: expecting '=', or '!', or ':', or '}'")
    }
    | '{' (yield_expr | star_expressions) "=" !('!' | ':' | '}') {
        PyErr_Occurred() ? NULL : RAISE_SYNTAX_ERROR("f-string: expecting '!', or ':', or '}'")
    }
    | '{' (yield_expr | star_expressions) "="? invalid_conversion_character
    | '{' (yield_expr | star_expressions) "="? ['!' NAME] !(':' | '}') {
        PyErr_Occurred() ? NULL : RAISE_SYNTAX_ERROR("f-string: expecting ':' or '}'")
    }
    | '{' (yield_expr | star_expressions) "="? ['!' NAME] [':' fstring_format_spec*] !'}' {
        PyErr_Occurred() ? NULL : RAISE_SYNTAX_ERROR("f-string: expecting '}'")
    }

sunmy2019 commented 1 year ago

With this new grammar:

  File "a.py", line 1
    f'{"s"!r{":10"}}'
            ^
SyntaxError: f-string: expecting ':' or '}'

Much better, I think.
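
For reference, a hedged way to check what messages an installed build actually reports for these two cases (the wording will vary by version and may differ from the proposal above):

for src in ("""f'{"s"!r{":10"}}'""", "f'{4:{/5}}'"):
    try:
        compile(src, "<fstring>", "eval")
    except SyntaxError as exc:
        print(f"{src!r} -> {exc.msg}")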

lysnikolaou commented 1 year ago

Totally agree. Good idea! Will you open a PR?

sunmy2019 commented 1 year ago

Sure

sunmy2019 commented 1 year ago

Hi there. Would anyone take a look at https://github.com/pablogsal/cpython/pull/67?

pablogsal commented 1 year ago

We are almost there; it seems we still have a small failure:

FAIL: test_random_files (test.test_tokenize.TestRoundtrip.test_random_files)

@isidentical this may be because we lack some changes in the untokenize function.

sunmy2019 commented 1 year ago

@pablogsal We lack changes in both the tokenize and untokenize functions; they cannot recognize the new f-string grammar. See https://github.com/pablogsal/cpython/pull/67#issuecomment-1505840208

pablogsal commented 1 year ago

@pablogsal We lack changes in both the tokenize and untokenize functions; they cannot recognize the new f-string grammar. See pablogsal#67 (comment)

We should be OK excluding the files that use the new funky syntax until we have the new tokenize implementation.

pablogsal commented 1 year ago

Seems that we have a failing test on the buildbots:

https://buildbot.python.org/all/#/builders/802/builds/760

lysnikolaou commented 1 year ago

Looking into the failure.

pablogsal commented 1 year ago

This code is kind of wrong:

https://github.com/python/cpython/blob/6be7aee18c5b8e639103df951d0d277f4b46f902/Parser/tokenizer.c#L2513-L2517

This fails when the f-string closes on a different line because the expression spans multiple lines:

>>> f"blech{
... 1
... +
... 1
... }
... "

Or for example, if you do

>>> f"123{1}
...

In both cases the values of tok->cur and tok->line_start are garbage. I suspect the problem is that we are not updating current_tok->f_string_start and current_tok->f_string_multi_line_start when the f-string expression spans multiple lines and the f-string is not triple-quoted.

To check this, please compile with --with-address-sanitizer

pablogsal commented 1 year ago

@cmaureir is going to work on allowing comments as per the specification.
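
A hedged sketch of what that enables once it lands (per PEP 701, a '#' comment inside the expression part of a multi-line f-string; this only parses on an interpreter with the new parser, i.e. 3.12+):

playlist = f"""songs: {", ".join([
    "Take me back to Eden",  # a comment inside the replacement field
    "Alkaline",
])}"""
print(playlist)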

lysnikolaou commented 1 year ago

@mgmacias95 @pablogsal How's the work on the tokenize module going? Are we going to be able to add that before the beta? Or would it be considered okay to add it later as well?

mgmacias95 commented 1 year ago

Hi @lysnikolaou, I just created a draft PR (https://github.com/python/cpython/pull/104323), please take a look.
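
For reference, a hedged peek at what the reworked tokenize module ends up emitting for f-strings on 3.12+ (the FSTRING_START/FSTRING_MIDDLE/FSTRING_END token names come from the final implementation, not necessarily this draft):

import io
import tokenize

src = 'f"hello {name}"\n'
for tok in tokenize.generate_tokens(io.StringIO(src).readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))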