lintool closed this issue 4 years ago
Pulling this in from Slack:
Looking at all the RDD filters, they're all basically the same implementation: there's a field, and a custom filter is applied to it. So a DF and RDD re-implementation could be very similar. Basically what you proposed: the filter UDF taking in two parameters. So, we could do something like this for both RDD and DF:
```scala
.filter($"col".isInUrlPatterns(Set(".*index.*".r)))
```
...and, if we play our cards right, we could just have one implementation for both :man_shrugging:
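For illustration only, the shared logic could boil down to a single plain-Scala predicate that the RDD filter calls directly and the DF side wraps in a Spark UDF. A minimal sketch, assuming this name and signature (not the actual aut API):

```scala
import scala.util.matching.Regex

// Hypothetical shared predicate: true if the URL matches any of the patterns.
// The RDD filter would call this directly; the DF version would wrap it in a
// UDF behind an isInUrlPatterns Column method.
def matchesUrlPatterns(url: String, patterns: Set[Regex]): Boolean =
  patterns.exists(_.findFirstIn(url).isDefined)

val patterns = Set(".*index.*".r)
val hit = matchesUrlPatterns("http://geocities.com/index.html", patterns)
val miss = matchesUrlPatterns("http://geocities.com/about.html", patterns)
```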
> we could just have one implementation for both
That would be great in the short term, but not necessary for the long term, IMO. Eventually, the DF functionality would be a superset of the RDD functionality, since we have no intention of backporting new DF features to RDD.
Seems like it would be helpful to have the `a -> Bool` tests regardless, and these could be implemented in the existing .keep functions if that's desired.
Filter and filterNot (does Scala have filterNot?) are more canonical in both Python and Scala.
Also using filter suits FAAV.
Ryan...
-- Ryan Deschamps (@ryandeschamps) ryan.deschamps@gmail.com
Thinking about this more, I'm not seeing the use of moving in this direction, since it appears to be a slightly more complicated version of just using filter.
For example:
```scala
import io.archivesunleashed._

RecordLoader.loadArchives("/home/nruest/Projects/au/sample-data/geocities", sc)
  .webpages()
  .count()

// res0: Long = 125579
```
```scala
import io.archivesunleashed._

val languages = Set("th", "de", "ht")

RecordLoader.loadArchives("/home/nruest/Projects/au/sample-data/geocities", sc)
  .webpages()
  .keepLanguagesDF(languages)
  .count()

// res1: Long = 3536
```
```scala
import io.archivesunleashed._

val languages = Set("th", "de", "ht")

RecordLoader.loadArchives("/home/nruest/Projects/au/sample-data/geocities", sc)
  .webpages()
  .filter($"language".isInCollection(languages))
  .count()

// res7: Long = 3536
```
With that, I'd argue we either keep what we have now, or remove all the DataFrame filters as they currently exist, and resolve this issue by updating the current documentation with the pure Spark DF implementation of filters.
...and if we go with the latter, that'll solve a sizable chunk of the Python implementation :smiley:
Can I propose yet another alternative design?
```scala
import io.archivesunleashed._

RecordLoader.loadArchives("/home/nruest/Projects/au/sample-data/geocities", sc)
  .webpages()
  .filter(hasLanguage("th", "de", "ht"))
  .count()
```
This saves the scholar from having to know about the schema explicitly? The UDF should be able to figure it out...
And similarly, we can have:
```scala
import io.archivesunleashed._

RecordLoader.loadArchives("/home/nruest/Projects/au/sample-data/geocities", sc)
  .webpages()
  .filter(urlMatches("""regex"""))
  .count()
```
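For illustration, the two proposed helpers could reduce to plain predicates that the DF layer wraps in UDFs bound to the right columns. A minimal sketch, assuming these names and signatures (not the actual aut API):

```scala
// Hypothetical predicates behind the proposed helpers. The real versions
// would return Spark Columns bound to the "language" / "url" fields via UDFs,
// so the user never touches the schema or $-notation.
def hasLanguage(langs: String*)(recordLanguage: String): Boolean =
  langs.contains(recordLanguage)

def urlMatches(pattern: String)(url: String): Boolean =
  pattern.r.findFirstIn(url).isDefined

val keepTh = hasLanguage("th", "de", "ht")("th")
val dropEn = hasLanguage("th", "de", "ht")("en")
val idx = urlMatches(".*index.*")("http://geocities.com/index.html")
```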
I'm just thinking that a humanities scholar might get confused/scared by the $ notation, and would be unfamiliar with the . method notation being applied with weird $ thingys?
Yeah, I like that better @lintool. Then we should be able to get the negation with ! in Scala and ~ in Python/PySpark.
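As a plain-Scala analogy (Spark's ! on a Column plays the same role), negating the predicate flips keep into discard:

```scala
val langs = Set("th", "de", "ht")
val pages = Seq("th", "en", "de", "fr")

// Keep records whose language is in the set...
val kept = pages.filter(p => langs.contains(p))
// ...and negate the same predicate to discard them instead.
val dropped = pages.filterNot(p => langs.contains(p))
```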
keepImagesDF -> hasImages
keepHttpStatusDF -> hasHTTPStatus
keepDateDF -> hasDates
keepUrlsDF -> hasUrls
keepDomainsDF -> hasDomains
keepMimeTypesTikaDF -> hasTikaMimeTypes
keepMimeTypesDF -> hasMimeTypes
keepContentDF -> contentMatches
keepUrlPatternsDF -> urlMatches
keepLanguagesDF -> hasLanguages
Feel free to suggest better names if you have them.
```scala
RecordLoader.loadArchives("/home/nruest/Projects/au/sample-data/geocities", sc)
  .webpages()
  .filter(hasLanguage("th", "de", "ht"))
  .count()
```
In this approach, the hasLanguage() function would require the column as well. Something like:

```scala
filter(hasLanguage($"content", Set("th", "de", "ht")))
```

Is this fine?
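The column-explicit variant can also be sketched as a plain function, where the first argument stands in for the value drawn from the Spark Column (names and signature assumed for illustration):

```scala
// Hypothetical column-explicit form: the caller supplies the value to test
// (standing in for the column reference) along with the allowed set.
def hasLanguage(value: String, langs: Set[String]): Boolean =
  langs.contains(value)

val hit = hasLanguage("de", Set("th", "de", "ht"))
val miss = hasLanguage("en", Set("th", "de", "ht"))
```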
Yeah, that makes sense to me. Work for you @lintool and @ianmilligan1?
Works for me @ruebot and @SinghGursimran!
Currently, we're doing something like the keep*DF helpers in DFs, e.g. .keepLanguagesDF(languages).
This is a straightforward translation of what we've been doing in RDDs, so that's fine. However, in DF, something like .filter($"language".isInCollection(languages)) would be more fluent:
This would require reimplementing all of our filters... let's discuss.