yangheng95 / PyABSA

Sentiment Analysis, Text Classification, Text Augmentation, Text Adversarial defense, etc.;
https://pyabsa.readthedocs.io
MIT License

Automatic extraction of sentiment driver possible? #285

Closed. ghost closed this issue 1 year ago

ghost commented 1 year ago

I want to extract the sentiment driver (opinion term) and the polarity for a given entity. I'm aware you can do Aspect Sentiment Triplet Extraction (ASTE), but the sentiment driver and the polarity have to be known beforehand and added as part of the annotation, right? For example:

'The cake tastes great.####[([1], [4], 'POS')]'
> {'Aspect': 'cake', 'Opinion': 'great', 'Polarity': 'Positive'}
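
(For reference, the part after #### is the triplet annotation over whitespace tokens. A tiny parser sketch, assuming 0-based token indices and using a made-up sentence whose indices line up with the tokenization:)

```python
import ast

def parse_aste_line(line):
    """Split an ASTE-style line into tokens and (aspect, opinion, polarity) triplets.

    Assumes the common '####'-separated format in which each triplet holds
    0-based whitespace-token indices (an assumption; check the dataset you use).
    """
    sentence, raw_labels = line.split("####")
    tokens = sentence.split()
    triplets = []
    for aspect_idx, opinion_idx, polarity in ast.literal_eval(raw_labels):
        triplets.append({
            "Aspect": " ".join(tokens[i] for i in aspect_idx),
            "Opinion": " ".join(tokens[i] for i in opinion_idx),
            "Polarity": polarity,
        })
    return tokens, triplets

# Hypothetical example whose indices match the tokenization:
line = "The chocolate cake tastes really great .####[([2], [5], 'POS')]"
print(parse_aste_line(line)[1])
# [{'Aspect': 'cake', 'Opinion': 'great', 'Polarity': 'POS'}]
```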

Is there a way to use ASTE where you only give it the entity and the sentiment driver and polarity are extracted automatically, like this?

'The cake tastes great.####[1]'
> {'Aspect': 'cake', 'Opinion': 'great', 'Polarity': 'Positive'}

Or can you get the sentiment driver extracted with APC somehow?

PS: I accidentally labelled this issue as a bug. It's not a bug :)

yangheng95 commented 1 year ago

These tags are only for human reference and are invisible to the model. You can simply input "The cake tastes great."

[image attachment]
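
Something along these lines works for plain-text inference (a sketch only; the module, class, and checkpoint names below are assumptions based on the examples-v2 layout and may differ in your installed version):

```python
# Sketch only: names are assumptions based on the examples-v2 layout.
from pyabsa import AspectSentimentTripletExtraction as ASTE

# Load a pretrained triplet extractor (checkpoint name is hypothetical).
extractor = ASTE.AspectSentimentTripletExtractor(checkpoint="english")

# No ####[...] annotation is needed at inference time; plain text is enough.
result = extractor.predict("The cake tastes great.")
print(result)
```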
ghost commented 1 year ago

That's clear, thank you!

But is there no way to pass a text and a list of entities as arguments and get their polarity and opinion extracted?

Input: 'The cake tastes great.', ['cake'] → Output: {'Aspect': 'cake', 'Opinion': 'great', 'Polarity': 'Positive'}

yangheng95 commented 1 year ago

https://github.com/yangheng95/PyABSA/blob/v2/examples-v2/aspect_polarity_classification/inference.py
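
That script runs aspect polarity classification on texts with the aspect marked inline. Condensed, the flow looks roughly like this (a sketch; the checkpoint name and the [B-ASP]/[E-ASP] markers are taken from the v2 examples and may differ in your version):

```python
from pyabsa import AspectPolarityClassification as APC

# Load a pretrained sentiment classifier (checkpoint name assumed).
classifier = APC.SentimentClassifier(checkpoint="multilingual")

# The aspect is marked inline; APC returns its polarity, but no opinion term.
result = classifier.predict(
    "The [B-ASP]cake[E-ASP] tastes great.",
    print_result=True,
)
```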

ghost commented 1 year ago

I'm aware of that script, but it only returns the polarity without the opinion/sentiment driver:

[{'text': 'The food was good, but the service was terrible.',
  'aspect': ['food', 'service'],
  'sentiment': ['Positive', 'Negative'],
  'confidence': [0.9846736192703247, 0.974354088306427],
  'probs': [array([0.01351087, 0.00181547, 0.9846736 ], dtype=float32),
   array([0.9743541 , 0.00281978, 0.02282613], dtype=float32)],
  'ref_sentiment': ['-100', '-100'],
  'ref_check': ['', ''],
  'perplexity': 'N.A.'},

 {'text': 'The food was terrible, but the service was good.',
  'aspect': ['food', 'service'],
  'sentiment': ['Negative', 'Positive'],
  'confidence': [0.9592238068580627, 0.9870247840881348],
  'probs': [array([0.9592238 , 0.00441253, 0.03636366], dtype=float32),
   array([0.01152945, 0.00144578, 0.9870248 ], dtype=float32)],
  'ref_sentiment': ['-100', '-100'],
  'ref_check': ['', ''],
  'perplexity': 'N.A.'},

 {'text': 'The food was so-so, and the service was terrible.',
  'aspect': ['food', 'service'],
  'sentiment': ['Neutral', 'Negative'],
  'confidence': [0.7357237339019775, 0.9721798300743103],
  'probs': [array([0.19523466, 0.73572373, 0.06904164], dtype=float32),
   array([0.97217983, 0.00251   , 0.02531017], dtype=float32)],
  'ref_sentiment': ['-100', '-100'],
  'ref_check': ['', ''],
  'perplexity': 'N.A.'}
]

Is there a way for predict to also return the opinion/sentiment drivers ("so-so", "terrible", "good")?

yangheng95 commented 1 year ago

Then there are no other choices in this repo.

ghost commented 1 year ago

Thanks, Heng! 🙏🏻