goodmami / wn

A modern, interlingual wordnet interface for Python
https://wn.readthedocs.io/
MIT License

Support for PTB and Universal POS tags #162

Closed LifeIsStrange closed 2 years ago

LifeIsStrange commented 2 years ago

Issue text updated by @goodmami

This issue appears to be a request to automatically map other part-of-speech tag schemes (such as PTB and Universal POS) to the ones used by wordnets so that a lookup for, e.g., wn.words('dog', pos='VERB') is equivalent to wn.words('dog', pos='v'). I'm not sure if the request is to also support reverse mappings (e.g., synset.ptb_pos).
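For illustration, a minimal sketch of what such a mapping could look like on the caller's side (the UPOS_TO_WN dict and the words_upos wrapper below are hypothetical, not part of Wn's API):

import wn

# hypothetical mapping from Universal POS tags to Wn's part-of-speech values
UPOS_TO_WN = {'NOUN': 'n', 'VERB': 'v', 'ADJ': 'a', 'ADV': 'r'}

def words_upos(form: str, pos: str = None, **kwargs):
    # translate a Universal POS tag (e.g. 'VERB') to Wn's tag (e.g. 'v') before lookup
    if pos is not None:
        pos = UPOS_TO_WN.get(pos, pos)
    return wn.words(form, pos=pos, **kwargs)

# words_upos('dog', pos='VERB') would then behave like wn.words('dog', pos='v')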

Original issue text:

https://github.com/nltk/nltk/pull/2965

goodmami commented 2 years ago

@LifeIsStrange can you please explain what you want with this issue? I see that the linked NLTK pull request is something to do with PTB style POS tags, but I don't know what you want here.

LifeIsStrange commented 2 years ago

Hmm, one of the use cases I see would be:

wn.words(pos='v')

The Wn APIs that accept a pos argument could allow Universal POS tags and Penn Treebank POS tags as equivalent values (since the pos can be obtained from an external program that may use those popular tagging schemes). I don't know whether the NLTK PR covers more use cases.

The original NLTK issue explains it better than I do: https://github.com/nltk/nltk/issues/2963

goodmami commented 2 years ago

Ok, I think I understand. I've updated the original issue text to clarify (please update if it's inaccurate).

However, my initial reaction is that this is not a good fit for Wn. Unlike the NLTK, Wn is not trying to accommodate a wide range of NLP tasks, but is specifically about modeling and working with wordnet data as defined by WN-LMF. I would therefore suggest using another tag mapper with Wn, such as the NLTK's nltk.tag.mapping (but I'm not sure if it supports the wordnet tagset). If it does, you could write a wrapper function:

from typing import List
import wn
from nltk.tag import map_tag  # nltk.tag.mapping's tag mapper

def ptb_synsets(lemma: str = None, pos: str = None, *args, **kwargs) -> List[wn.Synset]:
    if pos:
        pos = map_tag('en-ptb', 'wordnet', pos)  # map a PTB tag to the wordnet tagset
    return wn.synsets(lemma, pos, *args, **kwargs)
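If map_tag does support the wordnet tagset (unverified above), the wrapper could then be called with PTB tags directly, e.g.:

>>> ptb_synsets('dog', 'NN')  # hypothetically equivalent to wn.synsets('dog', pos='n')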
odds-get-evened commented 2 years ago

I found a case with 'p' that keeps throwing an error. It seems any POS tag with 'p' in it will not have an index assignment:

>> idk it's some kind of lame ai program or app
[('idk', 'NN'), ('it', 'PRP'), ("'s", 'VBZ'), ('some', 'DT'), ('kind', 'NN'), ('of', 'IN'), ('lame', 'JJ'), ('ai', 'JJ'), ('program', 'NN'), ('or', 'CC'), ('app', 'NN')]
  idk/NN
  File "C:\Users\chris\AppData\Local\Programs\Python\Python310\lib\site-packages\nltk\corpus\reader\sentiwordnet.py", line 94, in senti_synsets
    synset_list = wn.synsets(string, pos)
  File "C:\Users\chris\AppData\Local\Programs\Python\Python310\lib\site-packages\nltk\corpus\reader\wordnet.py", line 1700, in synsets
    return [
  File "C:\Users\chris\AppData\Local\Programs\Python\Python310\lib\site-packages\nltk\corpus\reader\wordnet.py", line 1703, in <listcomp>
    for form in self._morphy(lemma, p, check_exceptions)
  File "C:\Users\chris\AppData\Local\Programs\Python\Python310\lib\site-packages\nltk\corpus\reader\wordnet.py", line 2008, in _morphy
    exceptions = self._exception_map[pos]
KeyError: 'p'

import nltk
from nltk.corpus import sentiwordnet
from tinydb import TinyDB

def process_user_input(user_input: str, db: TinyDB):
    if len(user_input) > 0:
        # train(user_input, db)
        # response
        tokens = nltk.word_tokenize(user_input)
        tags = nltk.pos_tag(tokens)
        # find entities
        # entities = nltk.chunk.ne_chunk(tags)
        # lowercased PTB tags (e.g. 'prp') are passed straight to senti_synsets, triggering the KeyError above
        for word, tag in tags:
            print(list(sentiwordnet.senti_synsets(word, tag.lower())))
goodmami commented 2 years ago

@white5moke it appears you are using the NLTK in your example. This repository is a standalone project called Wn and is not part of the NLTK.

While the NLTK appears to raise an error for an unknown part of speech, in Wn, p is in fact a valid part of speech, but no wordnet (that I'm aware of) makes use of it. Using pos='p' should just return an empty list:

>>> import wn
>>> import wn.constants
>>> wn.constants.ADPOSITION
'p'
>>> wn.synsets(pos='p')
[]

Wn will also return an empty list (instead of an error) for an invalid part of speech:

>>> wn.constants.PARTS_OF_SPEECH
frozenset({'t', 'r', 's', 'p', 'v', 'a', 'n', 'u', 'x', 'c'})
>>> wn.synsets(pos='b')  # b is not a defined part of speech
[]
goodmami commented 2 years ago

Since support for non-wordnet POS schemes is not currently part of the roadmap, I will close this as wontfix.