Share a brief history of your project and organization
Common Crawl maintains a free, open repository of web crawl data that can be used by anyone.
Primary training corpus in every LLM. 82% of raw tokens used to train GPT-3. Free and open corpus since 2007. Cited in over 8,000 research papers. 3–5 billion new pages added each month.
Is this project associated with other projects/ecosystem stakeholders?
Yes
If answered yes, what are the other projects/ecosystem stakeholders
Common Crawl maintains a free, open repository of web crawl data that can be used by anyone.
Primary training corpus in every LLM. 82% of raw tokens used to train GPT-3. Free and open corpus since 2007. Cited in over 8,000 research papers. 3–5 billion new pages added each month.
Describe the data being stored onto Filecoin
Where was the data currently stored in this dataset sourced from
AWS Cloud
If you answered "Other" in the previous question, enter the details here
No response
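For context on the AWS source: Common Crawl's corpus is published through the AWS Open Data program as the public s3://commoncrawl bucket, also served over HTTPS at https://data.commoncrawl.org/. Below is a minimal sketch of an anonymous fetch with boto3; the object key is illustrative, and real archive paths are listed in each crawl's warc.paths.gz / wat.paths.gz / wet.paths.gz indexes.

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous (unsigned) S3 client; the commoncrawl bucket is public.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# Illustrative key: each crawl publishes *.paths.gz files listing its archives.
key = "crawl-data/CC-MAIN-2024-10/warc.paths.gz"
s3.download_file("commoncrawl", key, "warc.paths.gz")
```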
If you are a data preparer, what is your location (Country/Region)?
United States
If you are a data preparer, how will the data be prepared? Please include tooling used and technical details.
We use a script to package the files, originally stored on an nginx file server, into tar archives. Each tar file is kept to roughly 17–30 GB. Each tar archive is then converted into a CAR file, and once conversion completes, a record pairing the CAR file with the metadata of its source files is stored in our local system for later querying.
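A minimal sketch of that pipeline, assuming a local mirror of the nginx file tree and the `car` CLI from go-car (github.com/ipld/go-car) on PATH; the paths, batch bound, and metadata record are placeholders, not the production tooling.

```python
import subprocess
import tarfile
from pathlib import Path

SOURCE_DIR = Path("/data/nginx-mirror")   # hypothetical source tree
OUT_DIR = Path("/data/packed")
MAX_TAR_BYTES = 30 * 1024**3              # stay inside the ~17-30 GB target

def batches(root: Path, cap: int):
    """Yield groups of files whose combined size stays under `cap`."""
    group, size = [], 0
    for f in sorted(p for p in root.rglob("*") if p.is_file()):
        n = f.stat().st_size
        if group and size + n > cap:
            yield group
            group, size = [], 0
        group.append(f)
        size += n
    if group:
        yield group

OUT_DIR.mkdir(parents=True, exist_ok=True)
for i, group in enumerate(batches(SOURCE_DIR, MAX_TAR_BYTES)):
    tar_path = OUT_DIR / f"batch-{i:05d}.tar"
    with tarfile.open(tar_path, "w") as tar:
        for f in group:
            tar.add(f, arcname=str(f.relative_to(SOURCE_DIR)))
    # Convert the tar into a CAR file (assumed go-car `car create` syntax).
    car_path = tar_path.with_suffix(".car")
    subprocess.run(["car", "create", "-f", str(car_path), str(tar_path)],
                   check=True)
    # The real system records the CAR/source-file mapping in a local
    # database for later lookup; a print stands in for that here.
    print(car_path.name, len(group), "source files")
```

Capping tar size near 30 GB plausibly keeps each resulting CAR within a 32 GiB sector's usable piece size (about 31.75 GiB after Fr32 padding), though the application does not state the rationale for the 17–30 GB range.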
If you are not preparing the data, who will prepare the data? (Provide name and business)
No response
Has this dataset been stored on the Filecoin network before? If so, please explain and make the case why you would like to store this dataset again to the network. Provide details on preparation and/or SP distribution.
The dataset is very large and, as far as we know, no one has yet systematically stored all of it on the Filecoin network.
Please share a sample of the data
https://commoncrawl.org/
Confirm that this is a public dataset that can be retrieved by anyone on the Network
[X] I confirm
If you chose not to confirm, what was the reason
No response
What is the expected retrieval frequency for this data
Monthly
For how long do you plan to keep this dataset stored on Filecoin
2 to 3 years
In which geographies do you plan on making storage deals
Greater China, Asia other than Greater China, North America, Europe
How will you be distributing your data to storage providers
Cloud storage (i.e. S3), HTTP or FTP server, IPFS, Shipping hard drives, Lotus built-in data transfer
How did you find your storage providers
Slack, Filmine, Partners
If you answered "Others" in the previous question, what is the tool or platform you used
No response
Please list the provider IDs and location of the storage providers you will be working with.
Version
1
DataCap Applicant
Commoncrawl
Project ID
IPFSTT
Data Owner Name
Common Crawl
Data Owner Country/Region
United States
Data Owner Industry
Life Science / Healthcare
Website
https://commoncrawl.org/
Social Media Handle
https://commoncrawl.org/
Social Media Type
Slack
What is your role related to the dataset
Data Preparer
Total amount of DataCap being requested
12
Unit for total amount of DataCap being requested
PiB
Expected size of single dataset (one copy)
1.2
Unit for expected size of single dataset
PiB
Number of replicas to store
10
Weekly allocation of DataCap requested
1
Unit for weekly allocation of DataCap requested
PiB
On-chain address for first allocation
f1ryahi4jcdw6ople2rgnf42mprk6qpvskhgqun5i
Data Type of Application
Slingshot
Custom multisig
Identifier
No response
How do you plan to make deals to your storage providers
Boost client, Lotus client, Bidbot
If you answered "Others/custom tool" in the previous question, enter the details here
No response
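For the Boost/Lotus options above, a minimal sketch of scripting verified-deal proposals through the Lotus client CLI; provider IDs, root CIDs, and the duration are placeholders, and flag names should be checked against the deployed Lotus version.

```python
import subprocess

# Hypothetical (provider ID, CAR root CID) pairs produced during prep.
DEALS = [
    ("f01234", "bafybeigexamplerootcid"),
]

DURATION_EPOCHS = str(18 * 30 * 2880)  # ~18 months at 2880 epochs/day

for provider, root_cid in DEALS:
    # `lotus client deal <dataCid> <miner> <price> <duration>` proposes a
    # storage deal; --verified-deal spends DataCap rather than paying FIL.
    subprocess.run(
        ["lotus", "client", "deal", "--verified-deal=true",
         root_cid, provider, "0", DURATION_EPOCHS],
        check=True,
    )
```

This assumes each CAR was first imported with `lotus client import --car`; for Boost-enabled storage providers, the `boost deal` flow would take the place of the Lotus client call.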
Can you confirm that you will follow the Fil+ guideline
Yes