argilla-io / argilla

Argilla is a collaboration tool for AI engineers and domain experts to build high-quality datasets
https://docs.argilla.io
Apache License 2.0
3.73k stars 351 forks

[Bug] Token Classification emojis cause overlapping spans error & wrong annotations #2353

Open cceyda opened 1 year ago

cceyda commented 1 year ago

Describe the bug If there is a prediction/annotation mismatch and an emoji 💚 is present (I haven't tested with other emojis): [screenshot]

On the UI this shows the following error:

[screenshot]

I was told clearing all annotations and then annotating and saving works sometimes!

On the server side this is caused by ValueError: IOB tags cannot handle overlapping spans!

argilla_1        | ERROR:    Exception in ASGI application
argilla_1        | Traceback (most recent call last):
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 419, in run_asgi
argilla_1        |     result = await app(  # type: ignore[func-returns-value]
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
argilla_1        |     return await self.app(scope, receive, send)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/fastapi/applications.py", line 270, in __call__
argilla_1        |     await super().__call__(scope, receive, send)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/starlette/applications.py", line 124, in __call__
argilla_1        |     await self.middleware_stack(scope, receive, send)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 184, in __call__
argilla_1        |     raise exc
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 162, in __call__
argilla_1        |     await self.app(scope, receive, _send)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
argilla_1        |     raise exc
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
argilla_1        |     await self.app(scope, receive, sender)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
argilla_1        |     raise e
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
argilla_1        |     await self.app(scope, receive, send)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 706, in __call__
argilla_1        |     await route.handle(scope, receive, send)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 443, in handle
argilla_1        |     await self.app(scope, receive, send)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/fastapi/applications.py", line 270, in __call__
argilla_1        |     await super().__call__(scope, receive, send)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/starlette/applications.py", line 124, in __call__
argilla_1        |     await self.middleware_stack(scope, receive, send)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 184, in __call__
argilla_1        |     raise exc
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 162, in __call__
argilla_1        |     await self.app(scope, receive, _send)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/brotli_asgi/__init__.py", line 85, in __call__
argilla_1        |     await responder(scope, receive, send)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 44, in __call__
argilla_1        |     await self.app(scope, receive, self.send_with_gzip)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/starlette/middleware/cors.py", line 92, in __call__
argilla_1        |     await self.simple_response(scope, receive, send, request_headers=headers)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/starlette/middleware/cors.py", line 147, in simple_response
argilla_1        |     await self.app(scope, receive, send)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
argilla_1        |     raise exc
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
argilla_1        |     await self.app(scope, receive, sender)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
argilla_1        |     raise e
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
argilla_1        |     await self.app(scope, receive, send)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 706, in __call__
argilla_1        |     await route.handle(scope, receive, send)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 276, in handle
argilla_1        |     await self.app(scope, receive, send)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 66, in app
argilla_1        |     response = await func(request)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/fastapi/routing.py", line 235, in app
argilla_1        |     raw_response = await run_endpoint_function(
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/fastapi/routing.py", line 161, in run_endpoint_function
argilla_1        |     return await dependant.call(**values)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/argilla/server/apis/v0/handlers/token_classification.py", line 125, in bulk_records
argilla_1        |     result = await service.add_records(
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/argilla/server/services/tasks/token_classification/service.py", line 64, in add_records
argilla_1        |     failed = await self.__storage__.store_records(
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/argilla/server/services/storage/service.py", line 66, in store_records
argilla_1        |     record.metrics = metrics.record_metrics(record)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/argilla/server/services/tasks/token_classification/metrics.py", line 296, in record_metrics
argilla_1        |     annotated_tags = cls._compute_iob_tags(span_utils, record.annotation) or []
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/argilla/server/services/tasks/token_classification/metrics.py", line 340, in _compute_iob_tags
argilla_1        |     return span_utils.to_tags(spans)
argilla_1        |   File "/usr/local/lib/python3.9/site-packages/argilla/utils/span_utils.py", line 163, in to_tags
argilla_1        |     raise ValueError("IOB tags cannot handle overlapping spans!")
argilla_1        | ValueError: IOB tags cannot handle overlapping spans!
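For context, the check that raises here rejects any pair of character spans that overlap, since IOB tagging assigns exactly one tag per token. Below is a minimal sketch of that kind of validation; it is a hypothetical helper, not Argilla's actual span_utils.to_tags implementation (which also maps spans onto tokens):

```python
def check_no_overlap(spans):
    """Reject overlapping (label, start, end) character spans.

    Sketch of the kind of validation behind the error above;
    not Argilla's actual implementation.
    """
    spans = sorted(spans, key=lambda s: (s[1], s[2]))
    for (_, _, prev_end), (_, start, _) in zip(spans, spans[1:]):
        if start < prev_end:
            raise ValueError("IOB tags cannot handle overlapping spans!")


check_no_overlap([("A", 0, 3), ("B", 3, 5)])  # fine: touching spans don't overlap
try:
    check_no_overlap([("A", 0, 4), ("B", 3, 5)])
except ValueError as e:
    print(e)  # IOB tags cannot handle overlapping spans!
```

When frontend and backend disagree on character offsets (as with the emoji in this issue), spans that look disjoint on one side can overlap on the other, which is how this error gets triggered.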

Steps to reproduce I'm using char-level tokens:

text='💚abcd💚efgjklm.04 / abcde fgjklm (aaaaa)'
tokens=['💚','a','b','c','d','💚','e','f','g','j','k','l','m','.','0','4','/','a', 'b','c','d','e','f','g','j','k','l', 'm','(','a','a','a','a','a',')']
entities= [('A', 6, 13), ('B', 25, 31), ('C', 33, 38)]
record = rg.TokenClassificationRecord(
        text=text,
        tokens=tokens,
        prediction=entities,
        prediction_agent="annotator",
        # annotation=entities,
        # annotation_agent="old",
        metadata= {},
        status='Default',
        id=0
    )

โš ๏ธ This also causes the pred alignment to be off by 1char on the UI


Additional context If you are validating multiple records at once and one of them fails, the others fail too. And if the annotator doesn't notice the toast error message and moves on to the next page, their annotations on the previous page are lost. Maybe we could show an alert() popup when there are unsaved annotations and the user tries to navigate away?

tomaarsen commented 1 year ago

I've found a related issue.

import argilla as rg

records = [
    rg.TokenClassificationRecord(
        text="I โค๏ธ you", tokens=["I", "โค๏ธ", "you"], prediction=[("I", 0, 1), ("emoji", 2, 4), ("you", 5, 8)]
    ),
    rg.TokenClassificationRecord(
        text="I 💚 you", tokens=["I", "💚", "you"], prediction=[("I", 0, 1), ("emoji", 2, 3), ("you", 4, 7)]
    ),
    rg.TokenClassificationRecord(
        text="I h you", tokens=["I", "h", "you"], prediction=[("I", 0, 1), ("emoji", 2, 3), ("you", 4, 7)]
    ),
]

rg.delete("issue_2353_emoji")
rg.log(records, "issue_2353_emoji")

produces: [screenshots]

Note also the following awkward Python behaviour:

>>> len("h")
1
>>> len("💚")
1
>>> len("❤️")
2
>>> 
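The length difference comes from the code points involved: ❤️ is actually two code points (the heart plus a variation selector), while 💚 is a single one. The stdlib unicodedata module makes this visible:

```python
import unicodedata

# "\U0001F49A" is 💚 (green heart); "\u2764\ufe0f" is ❤️ (red heart + variation selector)
for s in ("\U0001F49A", "\u2764\ufe0f"):
    names = [f"U+{ord(c):04X} {unicodedata.name(c)}" for c in s]
    print(s, len(s), names)
# 💚 1 ['U+1F49A GREEN HEART']
# ❤️ 2 ['U+2764 HEAVY BLACK HEART', 'U+FE0F VARIATION SELECTOR-16']
```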
cceyda commented 1 year ago

emoji, halfwidth chars, and double-width chars are never easy to work with 🥲

tomaarsen commented 1 year ago

I've expanded on @cceyda's useful example with the red heart to show that the issue only exists with the green heart, not the red one:

import argilla as rg

rg.delete("issue_2353_emoji")

tokens = ["💚", "a", "b", "c", "d", "💚", "e", "f", "g", "j", "k", "l"]
text = "💚abcd💚efgjkl"
entities = [("A", 6, 9)]
assert text[6:9] == "efg"
record = rg.TokenClassificationRecord(
    text=text,
    tokens=tokens,
    prediction=entities,
)

rg.log(record, "issue_2353_emoji")

tokens = ["❤️", "a", "b", "c", "d", "❤️", "e", "f", "g", "j", "k", "l"]
text = "❤️abcd❤️efgjkl"
entities = [("A", 8, 11)]
assert text[8:11] == "efg"
record = rg.TokenClassificationRecord(
    text=text,
    tokens=tokens,
    prediction=entities,
)

rg.log(record, "issue_2353_emoji")

produces: [screenshots]

tomaarsen commented 1 year ago

[screenshot]

And this seems to be the issue: in Python the green heart only has length 1, while on the JS side it has length 2. As a result, the prediction start and end spans computed in Python simply slice differently in JS. Note, for example, that in my last comment I had to use different prediction spans to select only "efg".
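JavaScript's String.length counts UTF-16 code units, which can be reproduced from Python to see exactly where the two sides disagree (a sketch to illustrate the mismatch; the utf16_len helper is not part of Argilla):

```python
def utf16_len(s: str) -> int:
    """UTF-16 code-unit count, i.e. what JavaScript's "str".length returns."""
    return len(s.encode("utf-16-le")) // 2

# 💚 is one code point but outside the Basic Multilingual Plane, so JS needs a
# surrogate pair for it; ❤️ is two BMP code points, so both sides agree on 2.
for s in ("\U0001F49A", "\u2764\ufe0f", "h"):
    print(repr(s), "Python len:", len(s), "JS length:", utf16_len(s))
# 💚: Python 1 vs JS 2  (mismatch)
# ❤️: Python 2 vs JS 2
# h : Python 1 vs JS 1
```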

cceyda commented 1 year ago

I saw there was a related PR 1 year ago: https://github.com/argilla-io/argilla/commit/8b570fbb514e2d823a9b8efb839f80d69d93ecac

dvsrepo commented 1 year ago

Maybe @leiyre can give more context

cceyda commented 1 year ago

https://hsivonen.fi/string-length/ — see "🤦🏼‍♂️", "🤦🏼", "💖", "💘", "💝", "💞", "❣️", "✨". I think converting between JS and Python lengths can solve this. There are too many emojis and strange-width chars when working with multiple languages, so this would be critical.
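Converting a Python (code point) offset to a JS (UTF-16) offset only needs the prefix of the text, so a mapping along these lines could translate spans at the API boundary (a sketch assuming the frontend counts UTF-16 units; the helper name is made up):

```python
def codepoint_to_utf16_offset(text: str, cp_offset: int) -> int:
    """Translate a Python character offset into the equivalent JS UTF-16 offset."""
    return len(text[:cp_offset].encode("utf-16-le")) // 2

text = "\U0001F49Aabcd\U0001F49Aefg"  # 💚abcd💚efg
assert text[6:9] == "efg"             # Python span (6, 9)
# Each 💚 takes 2 UTF-16 units, so JS needs (8, 11) for the same "efg"
print(codepoint_to_utf16_offset(text, 6), codepoint_to_utf16_offset(text, 9))  # → 8 11
```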

leiyre commented 1 year ago

Hi, in https://github.com/argilla-io/argilla/commit/8b570fbb514e2d823a9b8efb839f80d69d93ecac we included stringz on the frontend to avoid the problem with unicode in javascript. @tomaarsen, could I see your example in dev or pre to explore the behavior a bit more?

tomaarsen commented 1 year ago

@leiyre I've pushed my latest example to dev under issue_2353_emoji. I think you should be able to access it there now.

cceyda commented 1 year ago

Here is another thing:

https://user-images.githubusercontent.com/15624271/219418599-f0879a98-fa39-4fd7-8499-f9ddd58d54c2.mov

I mean, I understand why it happens, but not how to fix it 🤣. Just playing around in a notebook you can see why:

emojis = ["💖", "💘", "💝", "💞", "💚", "❣️", "✨", "🤦🏼‍♂️", "🤦🏼"]
for e in emojis:
    print(e, len(e), list(e))

[screenshot]

I don't really care about the visual artifact on the UI as long as the annotation boundaries are passed correctly when used in Python, like text[annotation.start:annotation.end]!

label_map={
    "A" : "location","B" : "organization","C" : "building", "D" : "vegetable","E" : "fruit","F" : "street","G" : "apt", "J" : "no",
    "K" : "food","L" : "drink","M" : "alcohol","N" : "name","O" : "phone","P" : "etc","R" : "blah","S" : "something","T" : "somethingelse"
}
labels=list(label_map.keys())
texts = [
    'ABCDEFG 🤦🏼‍♂️ jklmnop',
    'ABCDEFG 💞 jklmnop',
    'ABCDEFG 💚jklmnop',
    'ABCDEFG (❣️) jklmnop',
]
records=[]
for i,text in enumerate(texts):
    record = rg.TokenClassificationRecord(
                    text=text,
                    tokens=list(text.replace(" ","")),
                    prediction=[(labels[i],i,i+1)],
                    prediction_agent="model",
                    # annotation=entities,
                    # annotation_agent="old",
                    metadata= { },
                    status='Default', 
                    id=i
                )
    records.append(record)
rg.log(records=records, name="test-emoji")

Also, a question: would tokens=list(text.replace(" ","")) be the way to pass chars for char-level token annotation? (Maybe I'm doing it wrong; no spaces, right?)

keithCuniah commented 1 year ago

Hi!

When looking at the problem, @leiyre and I saw that some updates need to be done on the frontend. But, as pointed out by @tomaarsen, the green heart and the red heart don't have the same length in Python, which I guess is why we see an offset of 2 in the prediction of the second sentence in the image. So a correction on the backend is needed too to avoid this offset.

[screenshot]

cceyda commented 1 year ago

Yes, javascript uses UTF-16 code units to calculate string lengths, while python counts code points. The key concepts to understand are unicode code points, graphemes, and UTF-16 encoding. I meant to write a more detailed explanation... but I hope the code below will do a better job than words at showing what the problem is. Hope this clarifies how chars are handled in python when doing list("blahblah") or mystring[start_ind:end_ind], and the differences with JS.

import grapheme

# Example problematic strings gathered from around the internet. They just need to be
# composed of more than one unicode codepoint to cause problems.
string = "👩‍❤️‍💋‍👩🦍한ABதமிழ்💚🤦🏼‍♂️"
chars = list(grapheme.graphemes(string))
for char in chars:
    print(f"original: {char}")
    print(f"codepoints: {list(char)}")
    for l in list(char):
        print(l, ord(l))
        for i, b in enumerate(bytes(l, encoding="utf-8")):
            print(i, b)
        print("-----")
    print("#############")
cceyda commented 1 year ago

Also learned that in JS, if you use array expansion(?), you can get the number of codepoints accurately (same as python):

[..."🤦🏼‍♂️"].length
keithCuniah commented 1 year ago

I think you mean the Spread operator. I found this article in connection with your comment, but we need to be careful with this approach, since for example:

[..."👩‍💻"].length = 3

cceyda commented 1 year ago

but that is how python counts too! It counts code points. How humans perceive a single letter (A, B, C, etc. — think of this as the grapheme), how a single grapheme is represented (by one or more unicode code points), and how those code points are encoded (UTF-16, UTF-8) are all different things. Also, the problem is not just emojis: it is everything that is represented by more than one codepoint (basically some non-english letters, accented chars, emojis). But emojis are an easy way to debug this.

[screenshot]

also was helpful: https://stackoverflow.com/a/51422499/3726119

On the UI side we want graphemes (the Intl.Segmenter() polyfill can work for this); on the backend we want to count codepoints, like python does. JS natively (just doing "str".length) counts UTF-16 code units.
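The three counts can be compared side by side. Real grapheme segmentation follows UAX #29 (what Intl.Segmenter or the grapheme package implement); the sketch below only approximates it by gluing joiners, variation selectors, and skin-tone modifiers onto the previous character, which is enough for these emoji:

```python
def rough_graphemes(s: str) -> list:
    """Very rough grapheme clustering: NOT UAX #29, just enough for simple emoji."""
    joiners = {"\u200d", "\ufe0f"}                         # ZWJ, variation selector-16
    modifiers = {chr(c) for c in range(0x1F3FB, 0x1F400)}  # skin-tone modifiers
    clusters = []
    glue_next = False
    for ch in s:
        if clusters and (glue_next or ch in joiners or ch in modifiers):
            clusters[-1] += ch
        else:
            clusters.append(ch)
        glue_next = ch == "\u200d"  # a ZWJ also glues the *following* char
    return clusters

facepalm = "\U0001F926\U0001F3FC\u200d\u2642\ufe0f"  # 🤦🏼‍♂️
print(len(rough_graphemes(facepalm)),          # graphemes: 1  (what the UI should show)
      len(facepalm),                           # codepoints: 5 (what python len() counts)
      len(facepalm.encode("utf-16-le")) // 2)  # UTF-16 units: 7 (what JS .length counts)
```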

tomaarsen commented 1 year ago

@cceyda If that is indeed consistently equivalent to the Python lengths, then perhaps we can use that to place the spans correctly by only changing the frontend? Or would we strictly need to use e.g. the grapheme library to extract the lengths on the Python side rather than using len(...)?

cceyda commented 1 year ago

I would heavily suggest not straying from the norm of using len()/list() on the python side (i.e. counting codepoints), because that is basically how most tokenization libraries work (transformers, spacy, etc.). Also, the grapheme library can become outdated when a new Unicode version is released. I saw a very old PR for a native grapheme splitter in the cpython repository, but who knows when that will be adopted.

We should use graphemes only on the UI side because that is what makes sense to human eyes 👀 😄

github-actions[bot] commented 11 months ago

This issue is stale because it has been open for 90 days with no activity.

cceyda commented 11 months ago

bump as still important

github-actions[bot] commented 8 months ago

This issue is stale because it has been open for 90 days with no activity.

cceyda commented 8 months ago

bump