yangheng95 / PyABSA

Sentiment Analysis, Text Classification, Text Augmentation, Text Adversarial defense, etc.;
https://pyabsa.readthedocs.io
MIT License

Tried running the model with the simple instructions posted in the README file. Threw an error #294

Open 1shoaibazhar opened 1 year ago

1shoaibazhar commented 1 year ago

Error:

Traceback (most recent call last):
  File "/home/shoaib/.local/lib/python3.10/site-packages/pyabsa/tasks/AspectPolarityClassification/dataset_utils/lcf/apc_utils.py", line 385, in configure_spacy_model
    nlp = spacy.load(config.spacy_model)
  File "/home/shoaib/.local/lib/python3.10/site-packages/spacy/__init__.py", line 54, in load
    return util.load_model(
  File "/home/shoaib/.local/lib/python3.10/site-packages/spacy/util.py", line 449, in load_model
    raise IOError(Errors.E050.format(name=name))
OSError: [E050] Can't find model 'en_core_web_sm'. It doesn't seem to be a Python package or a valid path to a data directory.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/shoaib/.local/lib/python3.10/site-packages/pyabsa/tasks/AspectPolarityClassification/dataset_utils/lcf/apc_utils.py", line 397, in configure_spacy_model
    nlp = spacy.load(config.spacy_model)
  File "/home/shoaib/.local/lib/python3.10/site-packages/spacy/__init__.py", line 54, in load
    return util.load_model(
  File "/home/shoaib/.local/lib/python3.10/site-packages/spacy/util.py", line 449, in load_model
    raise IOError(Errors.E050.format(name=name))
OSError: [E050] Can't find model 'en_core_web_sm'. It doesn't seem to be a Python package or a valid path to a data directory.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/shoaib/Personal Learning/PyAbsa/Test Codes/aspectTermExtractor.py", line 12, in <module>
    aspect_extractor.predict(['I love this movie, it is so great!'],
  File "/home/shoaib/.local/lib/python3.10/site-packages/pyabsa/tasks/AspectTermExtraction/prediction/aspect_extractor.py", line 264, in predict
    return self.batch_predict(
  File "/home/shoaib/.local/lib/python3.10/site-packages/pyabsa/tasks/AspectTermExtraction/prediction/aspect_extractor.py", line 304, in batch_predict
    extraction_res, sentence_res = self._extract(target_file)
  File "/home/shoaib/.local/lib/python3.10/site-packages/pyabsa/tasks/AspectTermExtraction/prediction/aspect_extractor.py", line 374, in _extract
    infer_features = convert_ate_examples_to_features(
  File "/home/shoaib/.local/lib/python3.10/site-packages/pyabsa/tasks/AspectTermExtraction/dataset_utils/lcf/data_utils_for_inference.py", line 168, in convert_ate_examples_to_features
    configure_spacy_model(config)
  File "/home/shoaib/.local/lib/python3.10/site-packages/pyabsa/tasks/AspectPolarityClassification/dataset_utils/lcf/apc_utils.py", line 399, in configure_spacy_model
    raise RuntimeError(
RuntimeError: Download failed, you can download en_core_web_sm manually.

yangheng95 commented 1 year ago

Please check whether you can download en_core_web_sm:

pip install spacy
python -m spacy download en_core_web_sm
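If the shell commands succeed but the model still cannot be found at runtime, a programmatic fallback is also possible. Below is a minimal sketch (not from this thread), assuming spacy itself is already installed; the example sentence is only for illustration:

```python
import spacy

try:
    nlp = spacy.load("en_core_web_sm")
except OSError:
    # Equivalent to running: python -m spacy download en_core_web_sm
    from spacy.cli import download
    download("en_core_web_sm")
    nlp = spacy.load("en_core_web_sm")

print([token.text for token in nlp("I love this movie, it is so great!")])
```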

chen-bowen commented 1 year ago

Doesn't work for me. I tried the suggested steps above, but I'm getting this error now:

--------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[1], line 12
      6 aspect_extractor = ATEPC.AspectExtractor('english',
      7                                          auto_device=True,  # False means load model on CPU
      8                                          cal_perplexity=True,
      9                                          )
     11 # instance inference
---> 12 aspect_extractor.predict(['I love this movie, it is so great!'],
     13                          save_result=True,
     14                          print_result=True,  # print the result
     15                          ignore_error=True,  # ignore the error when the model cannot predict the input
     16                          )

File ~/opt/anaconda3/envs/mlqu/lib/python3.10/site-packages/pyabsa/tasks/AspectTermExtraction/prediction/aspect_extractor.py:264, in AspectExtractor.predict(self, text, save_result, print_result, pred_sentiment, **kwargs)
    260     return self.batch_predict(
    261         [text], save_result, print_result, pred_sentiment, **kwargs
    262     )[0]
    263 elif isinstance(text, list):
--> 264     return self.batch_predict(
    265         text, save_result, print_result, pred_sentiment, **kwargs
    266     )

File ~/opt/anaconda3/envs/mlqu/lib/python3.10/site-packages/pyabsa/tasks/AspectTermExtraction/prediction/aspect_extractor.py:304, in AspectExtractor.batch_predict(self, target_file, save_result, print_result, pred_sentiment, **kwargs)
    299     raise ValueError(
    300         "Please run inference using examples list or inference dataset path (list)!"
    301     )
    303 if target_file:
--> 304     extraction_res, sentence_res = self._extract(target_file)
    305     results["extraction_res"] = extraction_res
    306     if pred_sentiment:

File ~/opt/anaconda3/envs/mlqu/lib/python3.10/site-packages/pyabsa/tasks/AspectTermExtraction/prediction/aspect_extractor.py:449, in AspectExtractor._extract(self, examples)
    447 l_mask = l_mask.to(self.config.device)
    448 with torch.no_grad():
--> 449     ate_logits, apc_logits = self.model(
    450         input_ids_spc,
    451         token_type_ids=segment_ids,
    452         attention_mask=input_mask,
    453         labels=None,
    454         polarity=polarity,
    455         valid_ids=valid_ids,
    456         attention_mask_label=l_mask,
    457     )
    458 if self.config.use_bert_spc:
    459     label_ids = self.model.get_batch_token_labels_bert_base_indices(
    460         label_ids
    461     )

File ~/opt/anaconda3/envs/mlqu/lib/python3.10/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs)
   1106 # If we don't have any hooks, we want to skip the rest of the logic in
   1107 # this function, and just call forward.
   1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1109         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110     return forward_call(*input, **kwargs)
   1111 # Do not call functions when jit is used
   1112 full_backward_hooks, non_full_backward_hooks = [], []

File ~/opt/anaconda3/envs/mlqu/lib/python3.10/site-packages/pyabsa/tasks/AspectTermExtraction/models/__lcf__/fast_lcf_atepc.py:75, in FAST_LCF_ATEPC.forward(self, input_ids_spc, token_type_ids, attention_mask, labels, polarity, valid_ids, attention_mask_label, lcf_cdm_vec, lcf_cdw_vec)
     73     input_ids = self.get_ids_for_local_context_extractor(input_ids_spc)
     74     labels = self.get_batch_token_labels_bert_base_indices(labels)
---> 75     global_context_out = self.bert4global(
     76         input_ids=input_ids, attention_mask=attention_mask
     77     )["last_hidden_state"]
     78 else:
     79     global_context_out = self.bert4global(
     80         input_ids=input_ids_spc, attention_mask=attention_mask
     81     )["last_hidden_state"]

File ~/opt/anaconda3/envs/mlqu/lib/python3.10/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs)
   1106 # If we don't have any hooks, we want to skip the rest of the logic in
   1107 # this function, and just call forward.
   1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1109         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110     return forward_call(*input, **kwargs)
   1111 # Do not call functions when jit is used
   1112 full_backward_hooks, non_full_backward_hooks = [], []

File ~/opt/anaconda3/envs/mlqu/lib/python3.10/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:1049, in DebertaV2Model.forward(self, input_ids, attention_mask, token_type_ids, position_ids, inputs_embeds, output_attentions, output_hidden_states, return_dict)
   1039     token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)
   1041 embedding_output = self.embeddings(
   1042     input_ids=input_ids,
   1043     token_type_ids=token_type_ids,
   (...)
   1046     inputs_embeds=inputs_embeds,
   1047 )
-> 1049 encoder_outputs = self.encoder(
   1050     embedding_output,
   1051     attention_mask,
   1052     output_hidden_states=True,
   1053     output_attentions=output_attentions,
   1054     return_dict=return_dict,
   1055 )
   1056 encoded_layers = encoder_outputs[1]
   1058 if self.z_steps > 1:

File ~/opt/anaconda3/envs/mlqu/lib/python3.10/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs)
   1106 # If we don't have any hooks, we want to skip the rest of the logic in
   1107 # this function, and just call forward.
   1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1109         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110     return forward_call(*input, **kwargs)
   1111 # Do not call functions when jit is used
   1112 full_backward_hooks, non_full_backward_hooks = [], []

File ~/opt/anaconda3/envs/mlqu/lib/python3.10/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:465, in DebertaV2Encoder.forward(self, hidden_states, attention_mask, output_hidden_states, output_attentions, query_states, relative_pos, return_dict)
    463     input_mask = (attention_mask.sum(-2) > 0).byte()
    464 attention_mask = self.get_attention_mask(attention_mask)
--> 465 relative_pos = self.get_rel_pos(hidden_states, query_states, relative_pos)
    467 all_hidden_states = () if output_hidden_states else None
    468 all_attentions = () if output_attentions else None

File ~/opt/anaconda3/envs/mlqu/lib/python3.10/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:445, in DebertaV2Encoder.get_rel_pos(self, hidden_states, query_states, relative_pos)
    443 if self.relative_attention and relative_pos is None:
    444     q = query_states.size(-2) if query_states is not None else hidden_states.size(-2)
--> 445     relative_pos = build_relative_position(
    446         q, hidden_states.size(-2), bucket_size=self.position_buckets, max_position=self.max_relative_positions
    447     )
    448 return relative_pos

File ~/opt/anaconda3/envs/mlqu/lib/python3.10/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:564, in build_relative_position(query_size, key_size, bucket_size, max_position)
    562 rel_pos_ids = q_ids[:, None] - np.tile(k_ids, (q_ids.shape[0], 1))
    563 if bucket_size > 0 and max_position > 0:
--> 564     rel_pos_ids = make_log_bucket_position(rel_pos_ids, bucket_size, max_position)
    565 rel_pos_ids = torch.tensor(rel_pos_ids, dtype=torch.long)
    566 rel_pos_ids = rel_pos_ids[:query_size, :]

File ~/opt/anaconda3/envs/mlqu/lib/python3.10/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:538, in make_log_bucket_position(relative_pos, bucket_size, max_position)
    536 abs_pos = np.where((relative_pos < mid) & (relative_pos > -mid), mid - 1, np.abs(relative_pos))
    537 log_pos = np.ceil(np.log(abs_pos / mid) / np.log((max_position - 1) / mid) * (mid - 1)) + mid
--> 538 bucket_pos = np.where(abs_pos <= mid, relative_pos, log_pos * sign).astype(np.int)
    539 return bucket_pos

File ~/opt/anaconda3/envs/mlqu/lib/python3.10/site-packages/numpy/__init__.py:305, in __getattr__(attr)
    300     warnings.warn(
    301         f"In the future `np.{attr}` will be defined as the "
    302         "corresponding NumPy scalar.", FutureWarning, stacklevel=2)
    304 if attr in __former_attrs__:
--> 305     raise AttributeError(__former_attrs__[attr])
    307 # Importing Tester requires importing all of UnitTest which is not a
    308 # cheap import Since it is mainly used in test suits, we lazy import it
    309 # here to save on the order of 10 ms of import time for most users
    310 #
    311 # The previous way Tester was imported also had a side effect of adding
    312 # the full `numpy.testing` namespace
    313 if attr == 'testing':

AttributeError: module 'numpy' has no attribute 'int'.
`np.int` was a deprecated alias for the builtin `int`. To avoid this error in existing code, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
    https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations

/Users/bowen.chen/opt/anaconda3/envs/mlqu/lib/python3.10/site-packages/prompt_toolkit/application/application.py:955: DeprecationWarning: There is no current event loop
  loop = asyncio.get_event_loop()

Any ideas?

yangheng95 commented 1 year ago

Try pip uninstall numpy and then pip install numpy.
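If it helps, here is a quick check (a sketch, not from this thread) to confirm whether the reinstalled NumPy still provides the np.int alias that the pinned DeBERTa-v2 code path relies on:

```python
import numpy as np

print(np.__version__)
# np.int was deprecated in NumPy 1.20 and removed in 1.24, so this prints False
# on NumPy >= 1.24, where the make_log_bucket_position call above will fail.
print(hasattr(np, "int"))
```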

chen-bowen commented 1 year ago

Same error. It looks like this model requires a really old version of numpy (<1.20), but pip install would install numpy 1.22.4, which does not solve the issue. I was using this package on Python 3.10.6; it looks like Python 3.10.9 fixes this behavior, but unfortunately I can't upgrade due to a product requirement. Would it be possible for you to fix this line in modeling_deberta_v2.py: bucket_pos = np.where(abs_pos <= mid, relative_pos, log_pos * sign).astype(np.int)? (It would probably just need a retrain.) The base deberta_v2 model on Hugging Face has been updated accordingly; see https://github.com/huggingface/transformers/blob/main/src/transformers/models/deberta_v2/modeling_deberta_v2.py.

yangheng95 commented 1 year ago

Then you can revise the code yourself: try np.int_ instead of np.int in this line: bucket_pos = np.where(abs_pos <= mid, relative_pos, log_pos * sign).astype(np.int)
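For reference, here is a self-contained sketch of the revised function. Only the lines quoted in the traceback are verbatim; the sign and mid lines are reconstructed, so treat this as an illustration of the one-token change rather than a verified patch:

```python
import numpy as np

def make_log_bucket_position(relative_pos, bucket_size, max_position):
    # Same logic as in modeling_deberta_v2.py, but with np.int replaced by
    # np.int_ so the function also runs on NumPy >= 1.24, where np.int was removed.
    sign = np.sign(relative_pos)
    mid = bucket_size // 2
    abs_pos = np.where((relative_pos < mid) & (relative_pos > -mid), mid - 1, np.abs(relative_pos))
    log_pos = np.ceil(np.log(abs_pos / mid) / np.log((max_position - 1) / mid) * (mid - 1)) + mid
    bucket_pos = np.where(abs_pos <= mid, relative_pos, log_pos * sign).astype(np.int_)  # was np.int
    return bucket_pos

# Tiny smoke test with made-up values
rel = np.arange(-8, 9).reshape(1, -1)
print(make_log_bucket_position(rel, bucket_size=4, max_position=16))
```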

yangheng95 commented 1 year ago

I notice that this error has been fixed in transformers, so I suppose it would work if you update the transformers version.

chen-bowen commented 1 year ago

That's another product requirement: I have to use transformers version 4.18.0. Is there any other way this could work?

yangheng95 commented 1 year ago

You can clone the transformers repo and check out the 4.18.0 version, then move src/transformers into your project path to make it a local package. Then revise modeling_deberta_v2.py.

chen-bowen commented 1 year ago

thank you, going to give it a try

chen-bowen commented 1 year ago

I'm a little confused. If transformers is modified, do we need to modify pyabsa too? How does pyabsa know which transformers to use?

yangheng95 commented 1 year ago

No need to revise PyABSA for this issue. If you place the transformers package in the current working dir, it will shadow the transformers package installed by pip or conda, etc.
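A quick way to verify the shadowing (a sketch, not from this thread; the directory and file names are hypothetical) is to check where the imported package actually lives:

```python
# Hypothetical layout after copying src/transformers from the 4.18.0 tag into the
# project directory and editing modeling_deberta_v2.py there:
#
#   my_project/
#     transformers/             <- local, revised copy
#     aspectTermExtractor.py    <- the inference script
#
# Because the script's directory sits first on sys.path, the local copy is
# imported instead of the one installed in site-packages.
import transformers

print(transformers.__version__)  # expected: 4.18.0
print(transformers.__file__)     # expected: a path inside the project dir, not site-packages
```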