python / cpython

The Python programming language
https://www.python.org

"Encoding" detected in non-comment lines #63073

Closed b80175c4-fe9a-4de3-93c0-8d22673eb354 closed 9 years ago

b80175c4-fe9a-4de3-93c0-8d22673eb354 commented 11 years ago
BPO 18873
Nosy @loewis, @birkenfeld, @terryjreedy, @kbkaiser, @benjaminp, @serwy, @bitdancer, @meadori, @serhiy-storchaka, @pib
Files
  • tokenizer.patch
  • detect_encoding_in_comments_only.patch
  • pep0263_regex.diff
  • Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    ```python
    assignee = 'https://github.com/serhiy-storchaka'
    closed_at =
    created_at =
    labels = ['easy', 'expert-2to3', 'type-bug', 'library']
    title = '"Encoding" detected in non-comment lines'
    updated_at =
    user = 'https://github.com/pib'
    ```

    bugs.python.org fields:

    ```python
    activity =
    actor = 'serhiy.storchaka'
    assignee = 'serhiy.storchaka'
    closed = True
    closed_date =
    closer = 'serhiy.storchaka'
    components = ['Demos and Tools', 'Library (Lib)', '2to3 (2.x to 3.x conversion tool)']
    creation =
    creator = 'Paul.Bonser'
    dependencies = []
    files = ['31628', '31645', '31655']
    hgrepos = []
    issue_num = 18873
    keywords = ['patch', 'easy']
    message_count = 15.0
    messages = ['196435', '197081', '197117', '197165', '197166', '197196', '197201', '197944', '197946', '197948', '197956', '197964', '197965', '228299', '229147']
    nosy_count = 12.0
    nosy_names = ['loewis', 'georg.brandl', 'terry.reedy', 'kbk', 'benjamin.peterson', 'roger.serwy', 'r.david.murray', 'meador.inge', 'python-dev', 'serhiy.storchaka', 'Paul.Bonser', 'armicron']
    pr_nums = []
    priority = 'normal'
    resolution = 'fixed'
    stage = 'resolved'
    status = 'closed'
    superseder = None
    type = 'behavior'
    url = 'https://bugs.python.org/issue18873'
    versions = ['Python 2.7', 'Python 3.4', 'Python 3.5']
    ```

    b80175c4-fe9a-4de3-93c0-8d22673eb354 commented 11 years ago

    lib2to3.pgen2.tokenize:detect_encoding looks for the regex "coding[:=]\s*([-\w.]+)" in the first two lines of the file without first checking if they are comment lines.

    You can get 2to3 to fail with "SyntaxError: unknown encoding: 0" with a single line file:

        coding=0

    A simple fix would be to check that the line is a comment before trying to look up the encoding from that line.
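    A minimal sketch of that suggested fix (not the committed patch; `find_cookie` is a hypothetical helper name): only consult a line for the cookie if, after stripping leading whitespace, it is a comment.

    ```python
    import re

    # The regex the report quotes from lib2to3.pgen2.tokenize.
    cookie_re = re.compile(r"coding[:=]\s*([-\w.]+)")

    def find_cookie(line):
        # Refuse non-comment lines before searching for the cookie,
        # so an ordinary assignment like 'coding=0' is never matched.
        if not line.lstrip().startswith("#"):
            return None
        m = cookie_re.search(line)
        return m.group(1) if m else None

    assert find_cookie("coding=0") is None
    assert find_cookie("# -*- coding: utf-8 -*-") == "utf-8"
    ```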

    18c9dbae-c23f-4705-a27b-cf44fb48a3c2 commented 11 years ago

        -cookie_re = re.compile("coding[:=]\s*([-\w.]+)")
        +cookie_re = re.compile("#[^\r\n]coding[:=]\s*([-\w.]+)")

    Regex matches only if the encoding expression is preceded by a comment.

    serhiy-storchaka commented 11 years ago

    It will fail on:

    "#coding=0"

    I'm wondering why findall() is used to match this regexp.
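    The failure serhiy points out is easy to check: `[^\r\n]` demands exactly one character between the hash mark and `coding`, so a cookie written flush against the `#` never matches. A quick sketch of the broken proposal:

    ```python
    import re

    # The pattern proposed above: requires one non-newline character
    # between '#' and 'coding'.
    proposed = re.compile(r"#[^\r\n]coding[:=]\s*([-\w.]+)")

    assert proposed.search("# coding: utf-8") is not None  # one char between: matches
    assert proposed.search("#coding=utf-8") is None        # flush against '#': missed
    assert proposed.search("#coding=0") is None            # serhiy's example: missed
    ```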

    serhiy-storchaka commented 11 years ago

    The tokenize module, 2to3, IDLE, and the Tools/scripts/findnocoding.py script are all affected by this bug. The proposed patch fixes it in all of these places and adds tests for tokenize and 2to3.

    serhiy-storchaka commented 11 years ago

    And here is a patch which fixes the regular expression in PEP-263.

    terryjreedy commented 11 years ago

    Nasty bug. Running a file containing 'coding=0', a quite legitimate assignment statement, causes Idle to close with a LookupError, leading to a SyntaxError reported on the console if there is one (a 'crash' otherwise). (Idle closing is a separate problem, tracked in its own issue, from the misinterpretation of 'coding'.)

    Loading such a file works with a warning that should not be there.

    Adding # leads to "SyntaxError: unknown encoding" in a message box, without closing Idle. I presume this is to be expected and is proper. There is also a warning on loading.

    The code patch adds '^[ \t\f]' to the re. \f = FormFeed? Should that really be there? The PEP patch instead adds '^[ \t\v]', \v= VerticalTab? Same question, and why the difference?

    Your other changes to IOBinding.coding_spec look correct and fix a couple of bugs in the function (searching all lines for the coding cookie, mangling a line without a line end).

    Someone else should review the other code changes.

    serhiy-storchaka commented 11 years ago

    The code patch adds '^[ \t\f]' to the re. \f = FormFeed? Should that really be there? The PEP patch instead adds '^[ \t\v]', \v= VerticalTab? Same question, and why the difference?

    Good catch. I made a mistake in the PEP patch; it should be '\f' ('\014') in all cases.

    Yes, it should be. It corresponds to the code in Parser/tokenizer.c.
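    For reference, the pattern the fix converged on looks roughly like this (the exact committed regex may differ in detail): optional spaces, tabs, or form feeds, then a `#` comment, with the coding declaration allowed anywhere inside that comment.

    ```python
    import re

    # Sketch of the fixed cookie pattern: leading space/tab/formfeed is
    # permitted, the line must then be a comment, and 'coding' may appear
    # anywhere within the comment.
    cookie_re = re.compile(r'^[ \t\f]*#.*?coding[:=][ \t]*([-\w.]+)')

    assert cookie_re.match("# -*- coding: latin-1 -*-").group(1) == "latin-1"
    assert cookie_re.match("#coding=utf-8").group(1) == "utf-8"
    assert cookie_re.match("coding=0") is None   # ordinary assignment, no cookie
    ```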

    terryjreedy commented 11 years ago

    One of the problems with encoding recognition is that the same logic is more or less reproduced in multiple places, so any fix needs to be applied in multiple places. detect_encoding_in_comments_only.patch touches:
      • Lib/idlelib/IOBinding.py
      • Lib/lib2to3/pgen2/tokenize.py
      • Lib/tokenize.py
      • Tools/scripts/findnocoding.py
    Any fix for issues bpo-18960 and bpo-18961 may also need to be applied in multiple places.

    If there is not now, it would be nice if there were just one python-coded function in Lib/tokenize.py that could be imported and used by the other python code. (I was going to suggest exposing the function in tokenize.c, but I believe the point of tokenize.py is to not be dependent on CPython.)
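    In Python 3 there already is one importable entry point, `tokenize.detect_encoding`, which the other tools could in principle delegate to instead of duplicating the regex. A short usage sketch:

    ```python
    import io
    import tokenize

    # detect_encoding reads at most the first two lines, looks for a
    # PEP 263 cookie (or a BOM), and returns the encoding plus the raw
    # lines it consumed. Note it normalizes aliases such as 'latin-1'.
    buf = io.BytesIO(b"# -*- coding: iso-8859-1 -*-\nx = 1\n")
    encoding, lines = tokenize.detect_encoding(buf.readline)
    assert encoding == "iso-8859-1"
    ```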

    I believe the Idle support for \r became obsolete when support for MacOS9 was dropped in 2.4. I notice that it is not part of io universal newline support.

    1762cc99-3127-4a62-9baf-30c3d0f51ef7 commented 11 years ago

    New changeset 2dfe8262093c by Serhiy Storchaka in branch '3.3': Issue bpo-18873: The tokenize module, IDLE, 2to3, and the findnocoding.py script http://hg.python.org/cpython/rev/2dfe8262093c

    New changeset 6b747ad4a99a by Serhiy Storchaka in branch 'default': Issue bpo-18873: The tokenize module, IDLE, 2to3, and the findnocoding.py script http://hg.python.org/cpython/rev/6b747ad4a99a

    New changeset 3d46ef0c62c5 by Serhiy Storchaka in branch '2.7': Issue bpo-18873: IDLE, 2to3, and the findnocoding.py script now detect Python http://hg.python.org/cpython/rev/3d46ef0c62c5

    serhiy-storchaka commented 11 years ago

    If there is not now, it would be nice if there were just one python-coded function in Lib/tokenize.py that could be imported and used by the other python code.

    Agreed. But look at how many tokenize issues are currently open.

    Thank you for your report Paul.

    I left PEP-263 not fixed yet. Perhaps it needs rewording (especially in the light of other issues, such as bpo-18960 and bpo-18961).

    bitdancer commented 11 years ago

    This appears to be causing buildbot lib2to3 test failures, e.g.:

    http://buildbot.python.org/all/builders/x86%20Ubuntu%20Shared%202.7/builds/2319/steps/test/logs/stdio

    http://buildbot.python.org/all/builders/PPC64%20PowerLinux%202.7/builds/206/steps/test/logs/stdio

    1762cc99-3127-4a62-9baf-30c3d0f51ef7 commented 11 years ago

    New changeset f16855d6d4e1 by Serhiy Storchaka in branch '2.7': Remove the use of non-existing re.ASCII. http://hg.python.org/cpython/rev/f16855d6d4e1

    serhiy-storchaka commented 11 years ago

    Thanks, David.

    terryjreedy commented 10 years ago

    This looks like it could be closed. We normally do not patch PEPs after they are implemented. Does a corrected version of something in PEP-263 need to be added to the ref manual?

    serhiy-storchaka commented 9 years ago

    I haven't fixed all the bugs in encoding-cookie handling yet (there are separate issues for those). Still, this issue can be closed; I'll open a new issue about the PEP when one is needed. The PEP should be corrected because it affects how other Python implementations and other tools handle this.