Open dportabella opened 7 years ago
This is not a use case we've considered thus far. It wouldn't be too hard to implement: loadArchive ultimately calls a Hadoop InputFormat to read ARCs and WARCs. We would need a corresponding Hadoop OutputFormat to implement the converse functionality; saveAsWarcArchive would then call this OutputFormat.
Just re-pinging this to keep it alive. I think I have a good use case for this too. Now to find time..
@lintool, loadArchive returns an RDD[ArchiveRecord], so at this point we have lost the information on the request and response headers (except for the URL, date, and MIME type), right?
Of course, if we don't care about those headers, we can create a new archive with dummy request and response headers. That would be fine for my current use case.
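For reference, a response record with dummy headers might look like the following; only the target URI, date, and MIME type survive from ArchiveRecord, and everything else would be filled with placeholder values (the `...` fields are elided here, not real values):

```
WARC/1.0
WARC-Type: response
WARC-Target-URI: http://example.com/
WARC-Date: 2017-01-01T00:00:00Z
WARC-Record-ID: <urn:uuid:...>
Content-Type: application/http; msgtype=response
Content-Length: ...

HTTP/1.1 200 OK
Content-Type: text/html

<html>...</html>
```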
Hi, I've created a gist that filters a WARC archive using Spark and stores the result back to a WARC archive: https://gist.github.com/dportabella/3caf261c218a4448a03a14dbc06fe730
I did not create a sophisticated Spark writer/serializer, but it does the job. If you are interested, I can integrate this code into your warcbase project.
We need to process a WARC archive, filter it based on keywords, and create a new WARC archive, saving both the request and response records.
Is this possible with warcbase? If not, any idea on how to achieve it?
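The reading-and-filtering half is already possible today; a minimal sketch (the loader entry point may be named slightly differently depending on the warcbase version, and the keyword list is an example):

```scala
// Sketch: load an archive, keep records whose content matches any keyword.
// Assumes a SparkContext `sc` and warcbase on the classpath.
import org.warcbase.spark.matchbox.RecordLoader

val keywords = Seq("climate", "energy")  // example filter terms

val filtered = RecordLoader.loadArchives("input/*.warc.gz", sc)
  .filter(r => keywords.exists(k => r.getContentString.contains(k)))

// Writing `filtered` back out as a WARC archive is the missing piece
// this issue asks for (saveAsWarcArchive is hypothetical):
// filtered.saveAsWarcArchive("output/")
```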