Closed gmdl007 closed 1 month ago
To load multiple HTML files and parse them for both unstructured text and structured tables, you can use the HTMLTagReader
and UnstructuredElementNodeParser
classes from the LlamaIndex framework. Here's a method to achieve this:
HTMLTagReader
to read the HTML files and extract text from specific tags.UnstructuredElementNodeParser
to handle mixed text and tables effectively.Here's a complete example:
from pathlib import Path
from llama_index.readers.file.html.base import HTMLTagReader
from llama_index.core.node_parser.relational.unstructured_element import UnstructuredElementNodeParser
from llama_index.core.schema import Document
def load_html_directory(directory_path: str, tag: str = "section", ignore_no_id: bool = False):
html_reader = HTMLTagReader(tag=tag, ignore_no_id=ignore_no_id)
node_parser = UnstructuredElementNodeParser()
documents = []
directory = Path(directory_path)
for html_file in directory.glob("*.html"):
docs = html_reader.load_data(html_file)
for doc in docs:
document = Document(text=doc)
nodes = node_parser.get_nodes_from_documents([document])
documents.extend(nodes)
return documents
# Example usage
directory_path = "/path/to/html/files"
documents = load_html_directory(directory_path, tag="section", ignore_no_id=False)
# Now `documents` contains the parsed content from all HTML files in the directory
This method will load and parse all HTML files in the specified directory, extracting both unstructured text and structured tables for optimal retrieval and query results [1][2][3][4].
To continue talking to Dosu, mention @dosu.
Got some errors:
ValidationError Traceback (most recent call last) Cell In[82], line 23 21 # Example usage 22 directory_path = "./ html/" ---> 23 documents = load_html_directory(directory_path, tag="section", ignore_no_id=False)
Cell In[82], line 15, in load_html_directory(directory_path, tag, ignore_no_id) 13 docs = html_reader.load_data(html_file) 14 for doc in docs: ---> 15 document = Document(text=doc) 16 nodes = node_parser.get_nodes_from_documents([document]) 17 documents.extend(nodes)
File ~/miniforge3/envs/new2/lib/python3.11/site-packages/pydantic/v1/main.py:341, in BaseModel.init(pydantic_self__, **data) 339 values, fields_set, validation_error = validate_model(pydantic_self.class, data) 340 if validation_error: --> 341 raise validation_error 342 try: 343 object_setattr(__pydantic_self, 'dict', values)
ValidationError: 1 validation error for Document text str type expected (type=type_error.str)
From: dosubot[bot] @.> Date: Tuesday, June 25, 2024 at 4:10 PM To: run-llama/llama_index @.> Cc: gmdl007 @.>, Author @.> Subject: Re: [run-llama/llama_index] [Question]: how to load multiple html files to get unstructured and structured table in a html page (Issue #14380)
To load multiple HTML files and parse them for both unstructured text and structured tables, you can use the HTMLTagReader and UnstructuredElementNodeParser classes from the LlamaIndex framework. Here's a method to achieve this:
Here's a complete example:
from pathlib import Path
from llama_index.readers.file.html.base import HTMLTagReader
from llama_index.core.node_parser.relational.unstructured_element import UnstructuredElementNodeParser
from llama_index.core.schema import Document
def load_html_directory(directory_path: str, tag: str = "section", ignore_no_id: bool = False):
html_reader = HTMLTagReader(tag=tag, ignore_no_id=ignore_no_id)
node_parser = UnstructuredElementNodeParser()
documents = []
directory = Path(directory_path)
for html_file in directory.glob("*.html"):
docs = html_reader.load_data(html_file)
for doc in docs:
document = Document(text=doc)
nodes = node_parser.get_nodes_from_documents([document])
documents.extend(nodes)
return documents
directory_path = "/path/to/html/files"
documents = load_html_directory(directory_path, tag="section", ignore_no_id=False)
documents
contains the parsed content from all HTML files in the directoryThis method will load and parse all HTML files in the specified directory, extracting both unstructured text and structured tables for optimal retrieval and query results [1]https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/llama_index/legacy/readers/file/html_reader.py[2]https://github.com/run-llama/llama_index/blob/main/llama-index-integrations/readers/llama-index-readers-file/llama_index/readers/file/html/base.py[3]https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/tests/node_parser/test_unstructured.py[4]https://github.com/run-llama/llama_index/blob/main/llama-index-core/tests/node_parser/test_unstructured.py.
To continue talking to Dosuhttps://dosu.dev, mention @dosuhttps://github.com/dosu.
— Reply to this email directly, view it on GitHubhttps://github.com/run-llama/llama_index/issues/14380#issuecomment-2189071176, or unsubscribehttps://github.com/notifications/unsubscribe-auth/AG2FHXHUPQ4F62JLDBBFQV3ZJF257AVCNFSM6AAAAABJ35VKHKVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDCOBZGA3TCMJXGY. You are receiving this because you authored the thread.Message ID: @.***>
The error occurs because the Document
class expects a string for the text
attribute, but it is receiving a Document
object instead. You need to pass the text
attribute of the Document
object to the Document
constructor. Here is the corrected code:
from pathlib import Path
from llama_index.readers.file.html.base import HTMLTagReader
from llama_index.core.node_parser.relational.unstructured_element import UnstructuredElementNodeParser
from llama_index.core.schema import Document
def load_html_directory(directory_path: str, tag: str = "section", ignore_no_id: bool = False):
html_reader = HTMLTagReader(tag=tag, ignore_no_id=ignore_no_id)
node_parser = UnstructuredElementNodeParser()
documents = []
directory = Path(directory_path)
for html_file in directory.glob("*.html"):
docs = html_reader.load_data(html_file)
for doc in docs:
document = Document(text=doc.text, metadata=doc.metadata) # Pass the text and metadata attributes
nodes = node_parser.get_nodes_from_documents([document])
documents.extend(nodes)
return documents
# Example usage
directory_path = "/path/to/html/files"
documents = load_html_directory(directory_path, tag="section", ignore_no_id=False)
# Now `documents` contains the parsed content from all HTML files in the directory
This change ensures that the Document
class receives the text
attribute as a string, which should resolve the ValidationError
[1].
To continue talking to Dosu, mention @dosu.
Great, it works very nice. Can you also advise some code for the actual RAG, i.e. query config? Best
From: dosubot[bot] @.> Date: Tuesday, June 25, 2024 at 4:28 PM To: run-llama/llama_index @.> Cc: gmdl007 @.>, Author @.> Subject: Re: [run-llama/llama_index] [Question]: how to load multiple html files to get unstructured and structured table in a html page (Issue #14380)
The error occurs because the Document class expects a string for the text attribute, but it is receiving a Document object instead. You need to pass the text attribute of the Document object to the Document constructor. Here is the corrected code:
from pathlib import Path
from llama_index.readers.file.html.base import HTMLTagReader
from llama_index.core.node_parser.relational.unstructured_element import UnstructuredElementNodeParser
from llama_index.core.schema import Document
def load_html_directory(directory_path: str, tag: str = "section", ignore_no_id: bool = False):
html_reader = HTMLTagReader(tag=tag, ignore_no_id=ignore_no_id)
node_parser = UnstructuredElementNodeParser()
documents = []
directory = Path(directory_path)
for html_file in directory.glob("*.html"):
docs = html_reader.load_data(html_file)
for doc in docs:
document = Document(text=doc.text, metadata=doc.metadata) # Pass the text and metadata attributes
nodes = node_parser.get_nodes_from_documents([document])
documents.extend(nodes)
return documents
directory_path = "/path/to/html/files"
documents = load_html_directory(directory_path, tag="section", ignore_no_id=False)
documents
contains the parsed content from all HTML files in the directoryThis change ensures that the Document class receives the text attribute as a string, which should resolve the ValidationError [1]https://github.com/run-llama/llama_index/blob/main/docs/docs/examples/discover_llamaindex/document_management/discord_dumps/help_channel_dump_05_25_23.json.
To continue talking to Dosuhttps://dosu.dev, mention @dosuhttps://github.com/dosu.
— Reply to this email directly, view it on GitHubhttps://github.com/run-llama/llama_index/issues/14380#issuecomment-2189119563, or unsubscribehttps://github.com/notifications/unsubscribe-auth/AG2FHXGWLVUBVSAVBVYBDBLZJF5ALAVCNFSM6AAAAABJ35VKHKVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDCOBZGEYTSNJWGM. You are receiving this because you authored the thread.Message ID: @.***>
To set up the query configuration for a Retrieval-Augmented Generation (RAG) system using the LlamaIndex framework, you need to follow several stages: loading, indexing, storing, querying, and evaluation. Below is a code example that demonstrates how to configure the querying stage, which is crucial for a RAG system.
from llama_index.core import Document, VectorStoreIndex, SimpleDirectoryReader
from llama_index.core.retrievers import VectorIndexRetriever
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.ingestion import IngestionPipeline
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI
from llama_index.core.query_engine import CustomQueryEngine
from llama_index.core.response_synthesizers import BaseSynthesizer
from llama_index.core import get_response_synthesizer
from pathlib import Path
# Step 1: Load your data into Documents
loader = SimpleDirectoryReader(input_dir="./data")
documents = loader.load_data()
# Step 2: Create the pipeline with transformations
pipeline = IngestionPipeline(
transformations=[
SentenceSplitter(),
]
)
nodes = pipeline.run(documents=documents, show_progress=True)
# Step 3: Index your data
embed_model = OpenAIEmbedding()
index = VectorStoreIndex(nodes=nodes, embed_model=embed_model)
# Step 4: Configure the retriever
retriever = VectorIndexRetriever(index=index, similarity_top_k=10)
# Step 5: Define a Custom Query Engine
class RAGQueryEngine(CustomQueryEngine):
"""RAG Query Engine."""
retriever: BaseRetriever
response_synthesizer: BaseSynthesizer
def custom_query(self, query_str: str):
nodes = self.retriever.retrieve(query_str)
response_obj = self.response_synthesizer.synthesize(query_str, nodes)
return response_obj
# Step 6: Configure the Query Engine
synthesizer = get_response_synthesizer(response_mode="compact")
query_engine = RAGQueryEngine(
retriever=retriever, response_synthesizer=synthesizer
)
# Step 7: Query the index
query = "How tall is the smallest penguin?"
response = query_engine.query(query)
print(response)
Document
objects using SimpleDirectoryReader
.SentenceSplitter
.VectorStoreIndex
and add your documents to it.VectorIndexRetriever
to fetch relevant context from the index.RAGQueryEngine
that performs retrieval and synthesis.This code provides a basic setup for querying in a RAG system using the LlamaIndex framework. You can customize each component (retriever, query engine) based on your specific requirements [1][2][3][4].
To continue talking to Dosu, mention @dosu.
how does this differ from previous example of loading html docs? does this config also parse the html document with tables and unstructured data the same?
Sent from my iPhone
On 27. Jun 2024, at 08:54, dosubot[bot] @.***> wrote:
To set up the query configuration for a Retrieval-Augmented Generation (RAG) system using the LlamaIndex framework, you need to follow several stages: loading, indexing, storing, querying, and evaluation. Below is a code example that demonstrates how to configure the querying stage, which is crucial for a RAG system.
Code Example for Query Configuration in RAG
from llama_index.core import Document, VectorStoreIndex, SimpleDirectoryReader from llama_index.core.retrievers import VectorIndexRetriever from llama_index.core.node_parser import SentenceSplitter from llama_index.core.ingestion import IngestionPipeline from llama_index.embeddings.openai import OpenAIEmbedding from llama_index.llms.openai import OpenAI from llama_index.core.query_engine import CustomQueryEngine from llama_index.core.response_synthesizers import BaseSynthesizer from llama_index.core import get_response_synthesizer from pathlib import Path
loader = SimpleDirectoryReader(input_dir="./data") documents = loader.load_data()
pipeline = IngestionPipeline( transformations=[ SentenceSplitter(), ] ) nodes = pipeline.run(documents=documents, show_progress=True)
embed_model = OpenAIEmbedding() index = VectorStoreIndex(nodes=nodes, embed_model=embed_model)
retriever = VectorIndexRetriever(index=index, similarity_top_k=10)
class RAGQueryEngine(CustomQueryEngine): """RAG Query Engine."""
retriever: BaseRetriever
response_synthesizer: BaseSynthesizer
def custom_query(self, query_str: str):
nodes = self.retriever.retrieve(query_str)
response_obj = self.response_synthesizer.synthesize(query_str, nodes)
return response_obj
synthesizer = get_response_synthesizer(response_mode="compact") query_engine = RAGQueryEngine( retriever=retriever, response_synthesizer=synthesizer )
query = "How tall is the smallest penguin?" response = query_engine.query(query)
print(response)
Explanation
This code provides a basic setup for querying in a RAG system using the LlamaIndex framework. You can customize each component (retriever, query engine) based on your specific requirements [1]https://github.com/run-llama/llama_index/blob/main/docs/docs/getting_started/concepts.md[2]https://github.com/run-llama/llama_index/blob/main/llama-index-packs/llama-index-packs-self-rag/examples/self_rag.ipynb[3]https://github.com/run-llama/llama_index/blob/main/llama-index-networks/examples/demo/contributor-1/contributor_1/app_query_engine.py[4]https://github.com/run-llama/llama_index/blob/main/docs/docs/examples/query_engine/custom_query_engine.ipynb.
To continue talking to Dosuhttps://dosu.dev, mention @dosuhttps://github.com/dosu.
— Reply to this email directly, view it on GitHubhttps://github.com/run-llama/llama_index/issues/14380#issuecomment-2193934172, or unsubscribehttps://github.com/notifications/unsubscribe-auth/AG2FHXCL2374U5U2BSMWLATZJOZINAVCNFSM6AAAAABJ35VKHKVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDCOJTHEZTIMJXGI. You are receiving this because you authored the thread.Message ID: @.***>
The query configuration for a Retrieval-Augmented Generation (RAG) system using the LlamaIndex framework does not inherently differ in its ability to parse HTML documents with tables and unstructured data compared to the previous example of loading HTML documents. Both configurations can handle unstructured documents, including HTML, and can parse complex documents with tables and unstructured data using LlamaParse, the proprietary document parser provided by LlamaIndex.
Supporting evidence from the document:
Additionally, the new document provides an example of how to use the Unstructured library to parse out tables and non-table elements from HTML documents, and then use LlamaIndex's recursive retrieval to index and retrieve these elements as needed. This further confirms that the RAG system configuration can parse HTML documents with tables and unstructured data in the same way as the previous example.
Supporting evidence from the new document:
Thus, the RAG system configuration also parses HTML documents with tables and unstructured data in the same way as the previous example [1][2].
To continue talking to Dosu, mention @dosu.
Question Validation
Question
hi people,
i checked this example for the 10k analysis for tesla, i think this is what i need for retrieve the information from bunch of html files:
https://docs.llamaindex.ai/en/stable/examples/query_engine/sec_tables/tesla_10q_table/
My files looks like this:
https://www.cisco.com/c/en/us/td/docs/iosxr/cisco8000/segment-routing/24xx/configuration/guide/b-segment-routing-cg-cisco8000-24xx/configuring-segment-routing-for-ospf-protocol.html or https://www.cisco.com/c/en/us/td/docs/iosxr/cisco8000/segment-routing/24xx/configuration/guide/b-segment-routing-cg-cisco8000-24xx/configuring-segment-routing-over-ipv6-srv6-micro-sids.html
as you can see these are typical configuration files with tables, text mixed. My question is how can i load a whole bunch of such html files automatically maybe like simple directory reader?
You can also suggest a way of parsing such html files to get best retrieval and query result...
Best