asyrofist / LangChainProposed

Langchain Proposed
MIT License

question about the task definition #1

Open Shiyun-W opened 8 months ago

Shiyun-W commented 8 months ago

Hi,

Thank you for sharing your work. I am also trying to extract information from articles, and I have a question about your task definition: do you frame it as an information extraction task? That is, the input is the whole text of the article, and the output is the set of elements you ask for in the prompt (e.g., "What is the methodology in the article?"), so the LLM extracts the needed information directly from the article. What I wonder is how the LLM knows the text granularity you want: some elements may need sentence-level spans, while for others you may want clauses. Does the LLM perform well at identifying the boundaries between different elements? And how do you evaluate its extraction performance, especially when the granularity differs?
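For illustration, the granularity question above can be made concrete with a toy splitter. This is a minimal sketch in plain Python; the function names and splitting rules are assumptions for demonstration, not anything from the repository:

```python
import re

def sentence_spans(text):
    # Naive sentence-level segmentation: split after ., !, or ?
    # followed by whitespace. Real pipelines would use a proper
    # sentence segmenter.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def clause_spans(sentence):
    # Naive clause-level segmentation: split on commas and on a few
    # coordinating conjunctions surrounded by whitespace.
    parts = re.split(r",\s*|\s+(?:and|but|while)\s+", sentence)
    return [p.strip() for p in parts if p.strip()]

text = "The study uses a survey, and the analysis is qualitative. Results improved."
sents = sentence_spans(text)   # 2 sentence-level elements
clauses = clause_spans(sents[0])  # 2 clause-level elements in the first sentence
```

The same source text yields different element counts depending on which granularity the prompt implicitly asks for, which is exactly why boundary evaluation is tricky.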

I very much hope to hear from you soon! I would greatly appreciate it.

asyrofist commented 8 months ago

Yes. The prompt syntax I built for this model shows how we can generate statements from several questions that we ask about a PDF.

If you want to extract more, such as POS tags or NER tags, you can use spaCy, NLTK, or any similar library to do that.
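As a toy illustration of what POS tagging produces (in practice spaCy or NLTK would be used as suggested above; the tiny lexicon below is a made-up assumption, not a real tagger):

```python
# Tiny rule-based POS tagger: look up each token in a toy lexicon,
# defaulting to NOUN. This only illustrates the (token, tag) output
# shape; real work should use spaCy or NLTK instead.
LEXICON = {"the": "DET", "a": "DET", "extracts": "VERB", "from": "ADP",
           "model": "NOUN", "text": "NOUN"}

def toy_pos_tag(sentence):
    return [(tok, LEXICON.get(tok.lower(), "NOUN")) for tok in sentence.split()]

tags = toy_pos_tag("The model extracts text")
# [('The', 'DET'), ('model', 'NOUN'), ('extracts', 'VERB'), ('text', 'NOUN')]
```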

The prompt we use is based on LangChain with OpenAI or Hugging Face models. That's the core of it; feel free to reach out if you want to discuss word-level extraction or anything similar.
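A minimal sketch of how such a question prompt over PDF text might be assembled. Plain string templating is used to keep it dependency-free; in LangChain this role is played by a prompt template object, and the field names and wording below are assumptions, not the repository's actual prompt:

```python
# Assemble a question-answering prompt over a chunk of PDF text.
# TEMPLATE, build_prompt, and the field names are hypothetical.
TEMPLATE = (
    "Use the following article excerpt to answer the question.\n"
    "Excerpt: {context}\n"
    "Question: {question}\n"
    "Answer:"
)

def build_prompt(context, question):
    return TEMPLATE.format(context=context, question=question)

prompt = build_prompt("We applied a grounded-theory methodology ...",
                      "What is the methodology in the article?")
```

The filled-in prompt is then sent to the LLM, which answers from the supplied excerpt rather than from its own memory.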

Systematic Literature Review (SLR) and Dependency Extraction are different tasks. When doing Dependency Extraction, we should use XML or relation models to identify each requirement. I suggest you learn and study more about LLM models and Dependency Extraction; they are totally different. I hope that helps, thank you.
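A minimal sketch of reading requirement dependencies from XML, assuming a hypothetical schema with `<requirement id="…">` and nested `<depends-on ref="…">` elements (not the repository's actual format):

```python
import xml.etree.ElementTree as ET

# Hypothetical requirements file: each <requirement> may declare
# dependencies on other requirements via <depends-on>.
XML = """
<requirements>
  <requirement id="R1"/>
  <requirement id="R2"><depends-on ref="R1"/></requirement>
  <requirement id="R3"><depends-on ref="R1"/><depends-on ref="R2"/></requirement>
</requirements>
"""

def parse_dependencies(xml_text):
    # Map each requirement id to the list of ids it depends on.
    root = ET.fromstring(xml_text)
    return {
        req.get("id"): [d.get("ref") for d in req.findall("depends-on")]
        for req in root.findall("requirement")
    }

deps = parse_dependencies(XML)
# {'R1': [], 'R2': ['R1'], 'R3': ['R1', 'R2']}
```

With an explicit relation structure like this, each requirement's dependencies are recoverable deterministically, which is what separates dependency extraction from open-ended LLM question answering.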

LLM models in this process only generate statements, much like ChatGPT, because they are based on OpenAI. If you want more, you should check out LangChain or other LLM framework websites.

Shiyun-W commented 8 months ago

OK, Thank you very much!