explosion / spacy-llm

🦙 Integrating LLMs into structured NLP pipelines
https://spacy.io/usage/large-language-models
MIT License

How to surpass BERT through large models #442

Open tianchiguaixia opened 6 months ago

tianchiguaixia commented 6 months ago

The current disadvantage of using large models for NER is that they cannot match the performance of a fine-tuned BERT. Is there any way to address this, for example through prompt engineering? If a large model could reach that level, it would greatly reduce labor costs.

rmitsch commented 6 months ago

Hi @tianchiguaixia, LLM performance on extractive tasks can be improved by (1) tuning your prompt, (2) providing few-shot examples, and (3) fine-tuning your LLM.
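
A minimal sketch of how points (1) and (2) map onto spacy-llm: the prompt is controlled by the task you configure (here `spacy.NER.v3` with your label set), and few-shot examples are supplied via `spacy.FewShotReader.v1`. The model name `spacy.GPT-3-5.v1`, the label set, and the `ner_examples.yml` path are placeholder assumptions for your own setup, not values from this thread.

```python
import spacy

# Blank English pipeline with an LLM-backed NER component.
nlp = spacy.blank("en")
nlp.add_pipe(
    "llm",
    config={
        "task": {
            # (1) The task controls the prompt template and labels.
            "@llm_tasks": "spacy.NER.v3",
            "labels": ["PERSON", "ORG", "LOCATION"],
            # (2) Few-shot examples read from a local file (hypothetical path).
            "examples": {
                "@misc": "spacy.FewShotReader.v1",
                "path": "ner_examples.yml",
            },
        },
        # Hosted model; requires OPENAI_API_KEY in the environment.
        "model": {"@llm_models": "spacy.GPT-3-5.v1"},
    },
)

doc = nlp("Sundar Pichai announced new offices for Google in Zurich.")
print([(ent.text, ent.label_) for ent in doc.ents])
```

The same setup can be expressed in a `config.cfg` and loaded with `spacy_llm.util.assemble`; curating a handful of representative few-shot examples is usually the cheapest way to close part of the gap to a fine-tuned BERT before resorting to (3), fine-tuning the LLM itself.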