calee88 closed this issue 4 years ago
So far, there's only the list of examples in the README with a short description of what data is extracted. In addition, every example will show its command-line help if called with the option --help.
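For example, something along these lines (a minimal sketch using cc_index_word_count.py; any of the example scripts should work the same way, since arguments placed after the script are passed through to it):
$SPARK_HOME/bin/spark-submit ./cc_index_word_count.py --help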
What exactly do you need?
Let me know what you need! You may also ask for help and support on the Common Crawl forum.
@sebastian-nagel Thank you for the reply.
The first one you mentioned is what I imagined when I wrote the issue. The second option is great, if you have time. The third option seems too much for this repository.
Here is my story of struggle, and it is still going on. You may skip reading this part. I am using Ubuntu 18.04.3 LTS. What I want to achieve is to extract monolingual text from Common Crawl. I started from the command-line help of cc_index_word_count.py. I had to search to find the path to the Common Crawl index table, and I also figured out that the query argument, though listed as optional, is actually required. I also needed to change the default Java version. Those were fine.

Then I got an error about s3: "No FileSystem for scheme: s3". So I searched the internet and found that I need additional packages for that, so I added --packages com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.7.2 to the command and changed the path to s3a://... Now it complained about AWS credentials, even though I had run "aws configure". My solution was to export the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
So, this is the current bash script:
#!/bin/bash
export AWS_ACCESS_KEY_ID=***
export AWS_SECRET_ACCESS_KEY=***
query="SELECT url, warc_filename, warc_record_offset, warc_record_length FROM ccindex LIMIT 10"
$SPARK_HOME/bin/spark-submit \
--conf spark.hadoop.parquet.enable.dictionary=true \
--conf spark.hadoop.parquet.enable.summary-metadata=false \
--conf spark.sql.hive.metastorePartitionPruning=true \
--conf spark.sql.parquet.filterPushdown=true \
--conf spark.sql.parquet.mergeSchema=true \
--conf spark.dynamicAllocation.maxExecutors=1 \
--packages com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.7.2 \
--executor-cores 1 \
--num-executors 1 \
cc_index_word_count.py --query "${query}" \
--num_input_partitions 1 \
--num_output_partitions 1 \
s3a://commoncrawl/cc-index/table/cc-main/warc/ word_count
And I'm getting org.apache.http.conn.ConnectionPoolTimeoutException. I tried limiting the executors (somebody on the internet suggested it), but it doesn't work as I expected. The exception happens at the "df = spark.read.load(table_path)" line of sparkcc.py.
Thank you for reading!
Hi @calee88, thanks for the careful report. I've opened #13 and #14 to improve documentation and command-line help.
When querying the columnar index (--query): the data is located in the AWS us-east-1 region (Northern Virginia). It can be accessed remotely, but this requires a reliable and fast internet connection. In case you own an AWS account, there are two options to avoid timeouts: run the Spark job in the us-east-1 region itself, or run the SQL query on Amazon Athena and pass the result CSV file to the job via the option --csv.
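For the second option, a minimal sketch (athena_result.csv and word_count are placeholder names, and I'm assuming the positional arguments, table path and output table, stay the same as with --query):
$SPARK_HOME/bin/spark-submit ./cc_index_word_count.py \
    --csv athena_result.csv \
    --num_output_partitions 1 \
    s3a://commoncrawl/cc-index/table/cc-main/warc/ word_count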
Let me know whether this works for you!
Thank you for the reply @sebastian-nagel! I'm on a reliable and fast internet connection, although I'm far from Northern Virginia, so I don't think the connection should be the problem here. Have you tried accessing the data remotely using the script I posted? Were you successful? Anyway, I'm going to try Athena or AWS as you suggested.
Hello @sebastian-nagel. I am now able to run the query on Athena and use the CSV file with the script. I still cannot use the query argument, but let me close this, as my original issue is summarized in #13.
Thanks, @calee88, for the feedback. #13 will get addressed soon. Yes, I'm able to run the script:
/opt/spark/2.4.4/bin/spark-submit --executor-cores 1 --num-executors 1 \
--packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.3 \
./cc_index_word_count.py \
--query "SELECT url, warc_filename, warc_record_offset, warc_record_length FROM ccindex WHERE crawl = 'CC-MAIN-2019-51' AND subset = 'warc' AND url_host_tld = 'is' LIMIT 10" \
s3a://commoncrawl/cc-index/table/cc-main/warc/ ccindexwordcount
About LIMIT: without further restrictions in the WHERE part, it seems to look into every part of the table, which currently contains 200 billion rows. That's why I've put extra restrictions: only Dec 2019, only the "warc" subset and only Icelandic sites.

df = sqlContext.read.parquet("spark-warehouse/ccindexwordcount")
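# (spark-warehouse/ is Spark SQL's default local warehouse directory, where the job wrote its output table)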
for row in df.sort(df.val.desc()).take(10): print("%6i\t%6i\t%s" % (row['val']['tf'], row['val']['df'], row['key']))
...
245 8 hd
154 10 the
97 8 movies
76 10 of
71 8 2019
69 10 and
64 10 to
62 10 online
62 2 football
61 10 free
for row in df.filter(df['key'].contains('ð')).take(10): print("%6i\t%6i\t%s" % (row['val']['tf'], row['val']['df'], row['key']))
...
1 1 tónleikaferð
1 1 annað
1 1 aðalmynd
2 2 sláðu
6 2 vönduð
1 1 viðeigandi
5 2 iðnó
2 2 með
2 2 lesið
3 2 jólastuði
Thank you for the reply @sebastian-nagel. Athena seems much faster, so I'll just keep using it. I hope someone finds this thread helpful.
It would have been helpful if there were some command examples for each .py file. Or am I just not finding them? For now, I need to read every line of code to understand the examples. Still, I appreciate the examples; it would be much harder without them.