fidlabs / Open-Data-Pathway


[DataCap Application] <Common Crawl> <2024-06-26T12:36:12.600Z> #43

Open martapiekarska opened 4 days ago

martapiekarska commented 4 days ago

Version

2024-06-26T12:36:12.600Z

DataCap Applicant

@lyjmry

Data Owner Name

Common Crawl

Data Owner Country/Region

Not-for-Profit

Website

https://commoncrawl.org

Social Media Handle

https://x.com/commoncrawl

Social Media Type

Twitter

What is your role related to the dataset

Data Preparer

Total amount of DataCap being requested

100PiB

Expected size of single dataset (one copy)

10PiB

Number of replicas to store

10

Weekly allocation of DataCap

2000TiB
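As a sanity check on the figures above (a sketch; the numbers are taken directly from the application fields): 10 PiB per copy times 10 replicas gives the 100 PiB requested, and at 2000 TiB per week the full allocation would take roughly a year to distribute.

```python
# Sanity-check the requested DataCap against dataset size and weekly rate.
# All figures come from the application fields above.
PIB_IN_TIB = 1024

single_copy_pib = 10   # expected size of one copy
replicas = 10          # number of replicas to store
weekly_tib = 2000      # weekly allocation of DataCap

total_pib = single_copy_pib * replicas
weeks_to_allocate = total_pib * PIB_IN_TIB / weekly_tib

print(total_pib)           # 100 (PiB), matching the total requested
print(weeks_to_allocate)   # 51.2 weeks, i.e. roughly one year
```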

On-chain address for first allocation

f1ouxkkeacppvlyx2rjsrx4tuwr7udix6uu47gjiy

Data Type of Application

Public, Open Dataset (Research/Non-Profit)

Identifier

9547

Share a brief history of your project and organization

Project: This project is organized by Common Crawl, an open-source, non-profit organization that regularly crawls web data, publishes it via the AWS Open Data program, and studies how to clean it into AI training data. They have archived crawl data from 2008 onwards, and the archive has grown by an order of magnitude since.

About me: I joined the Filecoin ecosystem in 2019 and have been active in the community since the second half of 2020, contributing to Filecoin Greater China through resource information reports, resource matchmaking, and more.

Is this project associated with other projects/ecosystem stakeholders?

No

If answered yes, what are the other projects/ecosystem stakeholders

Because the dataset is huge and valuable for AI training, I very much hope Filecoin can demonstrate its value as distributed, decentralized storage and guard against data loss from a catastrophic event at AWS storage nodes. I also hope Filecoin's developer ecosystem can use this data for AI training, expanding the data's value.

Where was the data currently stored in this dataset sourced from

AWS Cloud

If you answered "Other" in the previous question, enter the details here

If you are a data preparer. What is your location (Country/Region)

China

If you are a data preparer, how will the data be prepared? Please include tooling used and technical details?

I will coordinate closely with the storage providers. Where a provider is far away but has sufficient bandwidth, they will download the data directly from AWS; otherwise I will download the data to local hard disks and ship them for offline deals. Tooling will include, but is not limited to, boost.

If you are not preparing the data, who will prepare the data? (Provide name and business)

Me

Has this dataset been stored on the Filecoin network before? If so, please explain and make the case why you would like to store this dataset again to the network. Provide details on preparation and/or SP distribution.

N/A

Please share a sample of the data

https://data.commoncrawl.org/crawl-data/CC-NEWS/2016/09/warc.paths.gz https://data.commoncrawl.org/crawl-data/CC-MAIN-2024-18/cc-index-table.paths.gz https://data.commoncrawl.org/crawl-data/CC-MAIN-2024-18/cc-index.paths.gz https://data.commoncrawl.org/crawl-data/CC-MAIN-2024-18/non200responses.paths.gz
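The sample links above are gzipped path listings; each line inside is a relative key that becomes a download URL when prefixed with the public data host. A minimal sketch of that mapping (the key used below is one of the sample paths above, shortened to its relative form):

```python
# Turn a relative key from a Common Crawl *.paths listing into a download URL.
# Real keys are obtained by decompressing a listing such as warc.paths.gz.
BASE_URL = "https://data.commoncrawl.org/"

def to_download_url(path_line: str) -> str:
    """Prefix a relative key from a .paths file with the public data host."""
    return BASE_URL + path_line.strip()

key = "crawl-data/CC-MAIN-2024-18/cc-index.paths.gz"
print(to_download_url(key))
# https://data.commoncrawl.org/crawl-data/CC-MAIN-2024-18/cc-index.paths.gz
```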

Confirm that this is a public dataset that can be retrieved by anyone on the Network

Confirm

If you chose not to confirm, what was the reason

What is the expected retrieval frequency for this data

Yearly

For how long do you plan to keep this dataset stored on Filecoin

Permanently

In which geographies do you plan on making storage deals

Greater China, Asia other than Greater China, North America

How will you be distributing your data to storage providers

Cloud storage (i.e. S3), Shipping hard drives

How did you find your storage providers

Slack, Partners, Others

If you answered "Others" in the previous question, what is the tool or platform you used

Wechat

Please list the provider IDs and location of the storage providers you will be working with.

Hong Kong (China), Singapore, USA. As for MinerIDs: they plan to use new nodes for sealing, and I will update the list then.

How do you plan to make deals to your storage providers

Boost client, Lotus client

If you answered "Others/custom tool" in the previous question, enter the details here

Can you confirm that you will follow the Fil+ guideline

Yes

datacap-bot[bot] commented 4 days ago

Application is waiting for allocator review

datacap-bot[bot] commented 3 days ago

KYC has been requested. Please complete KYC at https://kyc.allocator.tech/?owner=fidlabs&repo=Open-Data-Pathway&client=f1ouxkkeacppvlyx2rjsrx4tuwr7udix6uu47gjiy&issue=43

kevzak commented 3 days ago

I can confirm GitHub KYC was completed using the Togggle third-party check: https://filplus.storage/api/get-kyc-users

lyjmry commented 3 days ago

Storage provider list:
f03068013 Hong Kong
f03144077 Hong Kong
f01928022 Vietnam
f03035686 Jiangxi, CN
f02951213 Singapore
f01850726 Vietnam
f03136267 Hong Kong
f02841588 Singapore
f02953066 USA

kevzak commented 2 days ago

@lyjmry are these SPs enabling retrievals on Spark Dashboard?

https://spacemeridian.grafana.net/public-dashboards/32c03ae0d89748e3b08e0f08121caa14?orgId=1

lyjmry commented 2 days ago

Reply from a miner that failed retrieval: "The data can be found in the index and on cid.contact, and Lassie and boost retrievals also work normally, but it does not show up in Spark statistics. Many people are reporting this problem."

lyjmry commented 2 days ago

(screenshot attached)

lyjmry commented 2 days ago

(screenshot attached)