bblfsh / sdk

Babelfish driver SDK
GNU General Public License v3.0

Comments including utf8 characters are not parsed correctly #433

Closed by smacker 4 years ago

smacker commented 4 years ago

Go client:

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"

    bblfsh "github.com/bblfsh/go-client/v4"
    "github.com/bblfsh/go-client/v4/tools"
)

const fileURL = "https://github.com/pytorch/tutorials/raw/master/advanced_source/neural_style_tutorial.py"

func main() {
    httpRes, err := http.Get(fileURL)
    if err != nil {
        panic(err)
    }
    if httpRes.StatusCode != http.StatusOK {
        panic("not 200 status")
    }
    content, err := ioutil.ReadAll(httpRes.Body)
    if err != nil {
        panic(err)
    }

    client, err := bblfsh.NewClient("localhost:9432")
    if err != nil {
        panic(err)
    }

    res, _, err := client.NewParseRequest().Language("python").Content(string(content)).UAST()
    if err != nil {
        panic(err)
    }

    it, _ := tools.Filter(res, "//uast:FunctionGroup")
    for it.Next() {
        fmt.Println("node")
    }
}

works without any errors.

Python client:

import bblfsh
import requests

FILE_URL = "https://github.com/pytorch/tutorials/raw/master/advanced_source/neural_style_tutorial.py"

def main():
    content = requests.get(FILE_URL).text

    client = bblfsh.BblfshClient("localhost:9432")
    ctx = client.parse(FILE_URL.split("/")[-1], None, content)
    it = ctx.filter("//uast:FunctionGroup")
    for node in it:
        print("node")

main()

results in:

node
node
node
node
node
node
node
node
node
node
node
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte

The above exception was the direct cause of the following exception:

SystemError: <class 'UnicodeDecodeError'> returned a result with an error set

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "reproduce.py", line 17, in <module>
    main()
  File "reproduce.py", line 13, in main
    for node in it:
  File "/Users/smacker/tmp/bblfsh_debug/venv/lib/python3.6/site-packages/bblfsh/node_iterator.py", line 29, in __next__
    self._last_node = Node(node_ext=next_node, ctx=self.ctx)
  File "/Users/smacker/tmp/bblfsh_debug/venv/lib/python3.6/site-packages/bblfsh/node.py", line 141, in __init__
    self.internal_node = node_ext.load()
SystemError: <built-in method load of pyuast.NodeExt object at 0x1039a47b0> returned a result with an error set
ncordon commented 4 years ago

It is not really the filter; the parse itself fails. If we do:

import bblfsh
import requests

FILE_URL = "https://github.com/pytorch/tutorials/raw/master/advanced_source/neural_style_tutorial.py"

content = requests.get(FILE_URL).text
client = bblfsh.BblfshClient("localhost:9432")
ctx = client.parse(FILE_URL.split("/")[-1], None, content)
ctx

we get the same error.

smacker commented 4 years ago

True. Though I also see:

ctx = decode(raw_uast, format=0) # no error
print(ctx) # error
ncordon commented 4 years ago

What is happening: \x80... is not valid UTF-8 (a UTF-8 sequence cannot start with that byte, as documented in the Python docs; b'\x80abc'.decode("utf-8", "strict") fails, for example).

Now, the file contains the string “closure”, and the final closing curly quote ” is encoded as b'\xe2\x80\x9d'. When we parse a Python file unicode.py which contains only this:

# a “closure”

with bblfsh-cli -m=semantic ./unicode.py, we get the following tree:

{ '@type': "python:Module",
   '@role': [File, Module],
   '@pos': { '@type': "uast:Positions",
   },
   body: [],
   'noops_remainder': { '@type': "python:RemainderNoops",
      '@role': [Noop],
      '@pos': { '@type': "uast:Positions",
         start: { '@type': "uast:Position",
            offset: 0,
            line: 1,
            col: 1,
         },
         end: { '@type': "uast:Position",
            offset: 0,
            line: 1,
            col: 1,
         },
      },
      lines: [
         { '@type': "uast:Comment",
            '@role': [Noop],
            '@pos': { '@type': "uast:Positions",
               start: { '@type': "uast:Position",
                  offset: 0,
                  line: 1,
                  col: 1,
               },
            },
            Block: false,
            Prefix: " ",
            Suffix: "\x80\x9d\n",
            Tab: "",
            Text: "a “closure\xe2",
         },
      ],
   },
}

And the string in Suffix: "\x80\x9d\n" is precisely the part we cannot decode (since it starts with \x80). It seems we are not handling UTF-8 correctly when generating the Suffix and Prefix, because we are splitting \xe2\x80\x9d as if its bytes were separate characters. It's a really nice catch @smacker (and really bizarre too :rofl: )

creachadair commented 4 years ago

In general, Babelfish doesn't work with encodings other than UTF-8. I think we should probably at the least have better error diagnostics for that case. (We might already have an issue for that, but I didn't find it in a cursory scan; I'll look more closely tomorrow).

Handling other encodings is a fairly deep design issue (it impacts not only the API, but the round-tripping of transformations, location as observed by the client vs. the parser, and so on), so we're not likely to do that. As a workaround, though, if the client can transcode the file into UTF-8 before sending it, that should at least ensure consistent results.

smacker commented 4 years ago

The problem isn't that bblfsh can't parse non-UTF-8 input; that's a known restriction. The problem is that it actually parses the file and lets you filter it, partially in Python and fully in Go. It would be nice if bblfsh could run utf8.Valid and reject the file early with a clear error message.

ncordon commented 4 years ago

Is the character UTF-8 or isn't it? There is a filter in the Python client to prevent non-UTF-8 characters from being parsed. I think the problem is that Python's notion of a UTF-8 character may differ from Go's. At least Python says the character is UTF-8 and encodes it as b'\xe2\x80\x9d' (3 bytes?). Maybe when computing the Prefix field, Go only accounts for UTF-8 characters of at most 2 bytes (I don't know if that makes sense).

smacker commented 4 years ago

Correct. It's a valid UTF-8 file from Go's point of view:

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "unicode/utf8"
)

const fileURL = "https://github.com/pytorch/tutorials/raw/master/advanced_source/neural_style_tutorial.py"

func main() {
    httpRes, err := http.Get(fileURL)
    if err != nil {
        panic(err)
    }
    if httpRes.StatusCode != http.StatusOK {
        panic("not 200 status")
    }
    content, err := ioutil.ReadAll(httpRes.Body)
    if err != nil {
        panic(err)
    }

    fmt.Println("Is valid utf?", utf8.Valid(content))
}
$ go run check.go
Is valid utf? true
dennwc commented 4 years ago

I think the issue may be caused by this line: we address the tab incorrectly, since it may be a non-ASCII character, but we access it as a byte.

A second cause might be related to position info, e.g. the transformer may pull a partial Unicode character if the byte offset is off by one.

But in general we need to narrow this case down to a single input line.