filecoin-project / filecoin-plus-large-datasets

Hub for client applications for DataCap at a large scale

MongoStorage - CommonCrawl Archive #2040

Open amughal opened 1 year ago

amughal commented 1 year ago

Data Owner Name

Common Crawl

What is your role related to the dataset

Data Preparer

Data Owner Country/Region

United States

Data Owner Industry

Not-for-Profit

Website

https://commoncrawl.org/

Social Media

None.

Total amount of DataCap being requested

10PiB

Expected size of single dataset (one copy)

1PiB

Number of replicas to store

10

Weekly allocation of DataCap requested

300TiB

On-chain address for first allocation

f1b4u4eclr63rjz2wqbnlso75vs5p5qp4rdmj45ai

Data Type of Application

Public, Open Dataset (Research/Non-Profit)

Custom multisig

Identifier

No response

Share a brief history of your project and organization

MongoStorage is an emerging Filecoin storage provider based in Southern California, USA. MongoStorage is FIL Green GOLD certified and is currently working toward full ESPA certification. The founders have extensive experience in networks and systems, have completed multiple ESPA sessions and presentations, and were featured in Protocol Labs' Zero to One Service Provider Twitter session.
We work as a data preparer in the Slingshot Moonlanding program, making the most useful data available on the Filecoin network.

Is this project associated with other projects/ecosystem stakeholders?

Yes

If answered yes, what are the other projects/ecosystem stakeholders

Working with BigDataExchange
SlingShot Moonlanding V3

Describe the data being stored onto Filecoin

The Common Crawl project is a corpus of web crawl data composed of over 50 billion web pages.
The following 10 crawls have been downloaded and are being prepared:

s3://commoncrawl/crawl-data/CC-MAIN-2022-40 – September/October 2022
s3://commoncrawl/crawl-data/CC-MAIN-2023-14 – March/April 2023
s3://commoncrawl/crawl-data/CC-MAIN-2023-06 – January/February 2023
s3://commoncrawl/crawl-data/CC-MAIN-2020-40 – September 2020
s3://commoncrawl/crawl-data/CC-MAIN-2020-45 – October 2020
s3://commoncrawl/crawl-data/CC-MAIN-2021-39 – September 2021
s3://commoncrawl/crawl-data/CC-MAIN-2021-49 – November/December 2021
s3://commoncrawl/crawl-data/CC-MAIN-2022-05 – January 2022
s3://commoncrawl/crawl-data/CC-MAIN-2022-21 – May 2022
s3://commoncrawl/crawl-data/CC-MAIN-2022-27 – June/July 2022
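Each crawl listed above publishes `*.paths.gz` listings of relative paths that can be fetched over HTTPS instead of S3 (relevant to the HTTP-vs-S3 discussion later in this thread). A minimal sketch, assuming the public `https://data.commoncrawl.org` mirror; verify the endpoint against Common Crawl's current documentation before any bulk download:

```python
import gzip

# Assumption: Common Crawl's public HTTPS mirror base URL.
CC_HTTP_BASE = "https://data.commoncrawl.org"

def to_https_url(relative_path: str) -> str:
    """Map a relative path from a *.paths.gz listing to a direct HTTPS URL."""
    return f"{CC_HTTP_BASE}/{relative_path.strip()}"

def urls_from_paths_gz(paths_gz_file: str) -> list[str]:
    """Read an already-downloaded warc/wat/wet paths.gz file, emit HTTPS URLs."""
    with gzip.open(paths_gz_file, "rt") as fh:
        return [to_https_url(line) for line in fh if line.strip()]

# Example: the WARC path-list file itself for the 2021-49 crawl.
print(to_https_url("crawl-data/CC-MAIN-2021-49/warc.paths.gz"))
```

The same pattern applies to the WAT, WET, robots.txt, and index listings for each crawl.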

Where was the data currently stored in this dataset sourced from

AWS Cloud

If you answered "Other" in the previous question, enter the details here

No response

How do you plan to prepare the dataset

singularity

If you answered "other/custom tool" in the previous question, enter the details here

No response

Please share a sample of the data

Below is a sample for one of the datasets. It lists the different directory structures; the files are gzip-compressed, with individual files enumerated in the path lists.

| File type | File list | # Files | Total size, compressed (TiB) |
|---|---|---|---|
| Segments | CC-MAIN-2021-49/segment.paths.gz | 100 | |
| WARC files | CC-MAIN-2021-49/warc.paths.gz | 64000 | 68.66 |
| WAT files | CC-MAIN-2021-49/wat.paths.gz | 64000 | 16.66 |
| WET files | CC-MAIN-2021-49/wet.paths.gz | 64000 | 7.18 |
| Robots.txt files | CC-MAIN-2021-49/robotstxt.paths.gz | 64000 | 0.15 |
| Non-200 responses files | CC-MAIN-2021-49/non200responses.paths.gz | 64000 | 2.29 |
| URL index files | CC-MAIN-2021-49/cc-index.paths.gz | 302 | 0.2 |

The Common Crawl URL Index for this crawl is available at https://index.commoncrawl.org/CC-MAIN-2021-49/. The columnar index has also been updated to include this crawl.
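The URL index above can be queried per crawl through the CDX endpoint. A sketch of building such a query; the parameter names follow the public CDX API, but treat them as assumptions to verify against the index documentation:

```python
from urllib.parse import urlencode

def cdx_query_url(crawl_id: str, url_pattern: str) -> str:
    """Build a CDX query URL against index.commoncrawl.org for one crawl."""
    # Assumption: "url" and "output=json" are the supported CDX parameters.
    params = urlencode({"url": url_pattern, "output": "json"})
    return f"https://index.commoncrawl.org/{crawl_id}-index?{params}"

# Look up captures of commoncrawl.org in the 2021-49 crawl.
print(cdx_query_url("CC-MAIN-2021-49", "commoncrawl.org"))
```

Fetching the resulting URL (network access required) returns one JSON record per capture.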

Confirm that this is a public dataset that can be retrieved by anyone on the Network

If you chose not to confirm, what was the reason

No response

What is the expected retrieval frequency for this data

Sporadic

For how long do you plan to keep this dataset stored on Filecoin

More than 3 years

In which geographies do you plan on making storage deals

Greater China, Asia other than Greater China, Africa, North America, South America, Europe, Australia (continent), Antarctica

How will you be distributing your data to storage providers

HTTP or FTP server

How do you plan to choose storage providers

Slack, Big Data Exchange, Partners

If you answered "Others" in the previous question, what is the tool or platform you plan to use

No response

If you already have a list of storage providers to work with, fill out their names and provider IDs below

Providers through BigDataExchange
Providers through Aligned
Providers through Slack

How do you plan to make deals to your storage providers

Boost client, Lotus client, Singularity

If you answered "Others/custom tool" in the previous question, enter the details here

No response

Can you confirm that you will follow the Fil+ guideline

Yes

large-datacap-requests[bot] commented 1 year ago

Thanks for your request!

Heads up, you’re requesting more than the typical weekly onboarding rate of DataCap!

large-datacap-requests[bot] commented 1 year ago

Thanks for your request! Everything looks good. :ok_hand:

A Governance Team member will review the information provided and contact you back pretty soon.

Sunnyiscoming commented 1 year ago

Does the data in this application overlap with the data from your previous application?

amughal commented 1 year ago

Question: is sharing a small subset of data allowed (two datasets out of the 10)? If not, I will make sure the datasets do not overlap.

Sunnyiscoming commented 1 year ago

A small subset of data sharing is not allowed.

amughal commented 1 year ago

OK, understood, thanks. I will make sure no dataset is shared between the previous approval and this one.

Sunnyiscoming commented 1 year ago

Datacap Request Trigger

Total DataCap requested

10 PiB

Expected weekly DataCap usage rate

300 TiB

Client address

f1b4u4eclr63rjz2wqbnlso75vs5p5qp4rdmj45ai

large-datacap-requests[bot] commented 1 year ago

DataCap Allocation requested

Multisig Notary address

f02049625

Client address

f1b4u4eclr63rjz2wqbnlso75vs5p5qp4rdmj45ai

DataCap allocation requested

150TiB

Id

5d4cc1ae-b938-4f1b-a423-f1262658bbdf

jamerduhgamer commented 1 year ago

Request Proposed

Your Datacap Allocation Request has been proposed by the Notary

Message sent to Filecoin Network

bafy2bzaceaazbte5g6p75rjuz5uhdripbqchcc6zhq35wolbdkw5q2ywekbjw

Address

f1b4u4eclr63rjz2wqbnlso75vs5p5qp4rdmj45ai

Datacap Allocated

150.00TiB

Signer Address

f1kqdiokoeubyse4qpihf7yrpl7czx4qgupx3eyzi

Id

5d4cc1ae-b938-4f1b-a423-f1262658bbdf

You can check the status of the message here: https://filfox.info/en/message/bafy2bzaceaazbte5g6p75rjuz5uhdripbqchcc6zhq35wolbdkw5q2ywekbjw

jamerduhgamer commented 1 year ago

Approved the first tranche of DataCap because MongoStorage is a reputable SP that has gone through the ESPA program, and the dataset is public, will be retrievable, and will be stored around the world.

zcfil commented 1 year ago

After reading the history, I have some questions:

I looked at the data sample. How will you allocate this data? Which cities do you plan to store the data in, and which storage vendors do you currently work with? Please list the SPs you are pre-collaborating with and their regions. I look forward to hearing from you.

amughal commented 1 year ago

Hello @zcfil . Please see replies below:

  1. This will be a distributed allocation. I work a lot with CB at BigDataExchange and others. My previous LDN approvals are on both the West Coast and the East Coast of the US.
  2. I act as a data preparer myself and participate in the Slingshot-v3/Moonlanding program. I have my own large storage for the data (many large JBODs from Dell, HPE, DDN and a few others, plus Dell/SuperMicro servers). These are used to download the datasets over a 5Gbps internet link, and then for CAR generation using Singularity.
  3. The SPs I am currently collaborating with are from BDE. I'm not sure I can post their miner IDs publicly here; I need to ask CB at BDE. My own SP miner ID is 'f01959735', which is not part of this request, but I am including it as a reference. Thanks

amughal commented 1 year ago

These two additional Miner IDs [BDE]: f01967469, f01717477

@zcfil Please let me know if you have any further questions?

zcfil commented 1 year ago

Could you send an email from your official domain to filplus-app-review@fil.org, with a copy to reymond.bu@gmail.com, to confirm your identity? The email subject should include the issue ID #2040.

amughal commented 1 year ago

@zcfil I have just sent the email. Thanks

zcfil commented 1 year ago

Gmail is not authoritative. If you have any communication results, please feel free to reply at any time. @Sunnyiscoming, may I ask whether this is a validated LDN?

amughal commented 1 year ago

@zcfil I have sent you another email with screenshot.

amughal commented 1 year ago

@zcfil I have sent you another email from my official address. Let me know if anything else is needed.

amughal commented 1 year ago

Hi @Sunnyiscoming, @zcfil is waiting for your input. Thanks

xinaxu commented 1 year ago

@amughal did you contact Common Crawl directly? They mentioned to us that serving data from their AWS bucket is extremely expensive for them, and they will likely provide you with a direct link to their HTTP server.

Kevin-FF-USA commented 1 year ago

Confirming on behalf of @Sunnyiscoming - email received and confirmed @zcfil

amughal commented 1 year ago

@xinaxu This is a very valid concern; I saw issues downloading from AWS last year. Since then, I have used their HTTP servers. In the Moonlanding channel, others complained about slowness and often not being able to retrieve data at all. I told Caro and the channel about the HTTP option, and it worked perfectly. Thanks for raising this.

kernelogic commented 1 year ago

Request Approved

Your Datacap Allocation Request has been approved by the Notary

Message sent to Filecoin Network

bafy2bzacedwnnnseafbrhvcjiypqcn66sptanrwhbkgsobxrqz3veo3qbnzza

Address

f1b4u4eclr63rjz2wqbnlso75vs5p5qp4rdmj45ai

Datacap Allocated

150.00TiB

Signer Address

f1yjhnsoga2ccnepb7t3p3ov5fzom3syhsuinxexa

Id

5d4cc1ae-b938-4f1b-a423-f1262658bbdf

You can check the status of the message here: https://filfox.info/en/message/bafy2bzacedwnnnseafbrhvcjiypqcn66sptanrwhbkgsobxrqz3veo3qbnzza

kernelogic commented 1 year ago

Approved with clarifications from the T&T team. Also, I agree HTTP is the way to go; I had to crawl their website to get the direct HTTP download links. You cannot download from the S3 bucket anonymously for this dataset.

amughal commented 1 year ago

Thank you @kernelogic. Is the DataCap allocation of 150TiB just the first tranche? We are hoping to start slow but reach 1PiB of sealing per week. Would that be an issue?

Sunnyiscoming commented 1 year ago

Received that. (screenshot attached)

kernelogic commented 1 year ago

@amughal yes, it is the first tranche. Each subsequent tranche is 200% of the previous one, so the next top-up will be 300TiB.
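The doubling rule described above (first tranche 150TiB, each subsequent tranche 200% of the previous, up to the 10PiB total) can be sketched as follows. Capping the final tranche to the remaining amount is an assumption about how the last allocation is sized, not confirmed Fil+ behavior:

```python
def tranche_schedule(total_tib: float, first_tib: float) -> list[float]:
    """Tranche sizes: start at first_tib, double each time, stop at total_tib."""
    tranches, granted, nxt = [], 0.0, first_tib
    while granted < total_tib:
        amount = min(nxt, total_tib - granted)  # assumed: final tranche capped
        tranches.append(amount)
        granted += amount
        nxt *= 2
    return tranches

# 10 PiB = 10240 TiB total, 150 TiB first tranche.
print(tranche_schedule(10 * 1024, 150))
```

Under this sketch the schedule starts 150, 300, 600, ... and exhausts the 10PiB request after seven tranches.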

large-datacap-requests[bot] commented 1 year ago

DataCap Allocation requested

Request number 2

Multisig Notary address

f02049625

Client address

f1b4u4eclr63rjz2wqbnlso75vs5p5qp4rdmj45ai

DataCap allocation requested

300TiB

Id

2ae97ea4-d5af-4a93-9c1c-1a0c76742ce1

large-datacap-requests[bot] commented 1 year ago

Stats & Info for DataCap Allocation

Multisig Notary address

f02049625

Client address

f1b4u4eclr63rjz2wqbnlso75vs5p5qp4rdmj45ai

Rule to calculate the allocation request amount

100% of weekly dc amount requested

DataCap allocation requested

300TiB

Total DataCap granted for client so far

150TiB

Datacap to be granted to reach the total amount requested by the client (10PiB)

9.85PiB

Stats

| Number of deals | Number of storage providers | Previous DC Allocated | Top provider | Remaining DC |
|---|---|---|---|---|
| 3593 | 1 | 150TiB | 100 | 35.57TiB |

amughal commented 1 year ago

Hello @kernelogic. We have actively started sealing at a high rate using the SaaS sealing provider, around 25-30TiB a day. Based on current usage, this application now requires signatures. Could you sign off as soon as possible? This would help us continue sealing through the holidays.

amughal commented 1 year ago

@kernelogic @Kevin-FF-USA @jamerduhgamer @zcfil @xinaxu Hello notaries, I need quick approval for the next tranche; any help is appreciated. Thank you, Azher

spaceT9 commented 1 year ago

checker:manualTrigger

filplus-checker-app[bot] commented 1 year ago

DataCap and CID Checker Report Summary[^1]

Retrieval Statistics

⚠️ All retrieval success ratios are below 1%.

Storage Provider Distribution

⚠️ 1 storage providers sealed more than 70% of total datacap - f02181705: 100.00%

⚠️ All storage providers are located in the same region.

Deal Data Replication

⚠️ 100.00% of deals are for data replicated across less than 3 storage providers.

Deal Data Shared with other Clients[^3]

✔️ No CID sharing has been observed.

[^1]: To manually trigger this report, add a comment with text checker:manualTrigger

[^2]: Deals from those addresses are combined into this report as they are specified with checker:manualTrigger

[^3]: To manually trigger this report with deals from other related addresses, add a comment with text checker:manualTrigger <other_address_1> <other_address_2> ...

Full report

Click here to view the CID Checker report. Click here to view the Retrieval report.

spaceT9 commented 1 year ago

All retrieval success ratios are 0; you need to find a way to improve the retrieval success rate before signing.

Fenbushi-Filecoin commented 1 year ago

> These two additional Miner IDs [BDE]: f01967469, f01717477
>
> @zcfil Please let me know if you have any further questions?

Based on the report, the miner IDs seem to differ from the ones you mentioned above, and only one miner is involved. Any reasons or explanations?

amughal commented 1 year ago

Hello @Fenbushi-Filecoin, the goal is to distribute 10 copies. I started with the active miner first, f02181705; more miners will begin sealing gradually, starting with the second tranche. Thanks

cryptowhizzard commented 1 year ago

@Fenbushi-Filecoin

Does it make a difference then?

```
lotus net connect f01967469
f01967469 -> {12D3KooWAtFaj2fcUzeVFx8NYsZAwnY1Nu6QyhdwswRYTBtHccPR: [/ip4/64.238.214.36/tcp/24006]}
connect 12D3KooWAtFaj2fcUzeVFx8NYsZAwnY1Nu6QyhdwswRYTBtHccPR: success

root@coinlisthhw:~# lotus net connect f01717477
f01717477 -> {12D3KooWAtFaj2fcUzeVFx8NYsZAwnY1Nu6QyhdwswRYTBtHccPR: [/ip4/64.238.214.36/tcp/24006]}
connect 12D3KooWAtFaj2fcUzeVFx8NYsZAwnY1Nu6QyhdwswRYTBtHccPR: success
```

They are all sharing the same IP subnet, no geographic distribution.

@amughal we need some explanations now please.
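The shared-subnet observation can be checked mechanically. A small sketch using Python's stdlib `ipaddress` module, with a /24 as an assumed "same subnet" heuristic:

```python
import ipaddress

def same_subnet(ip_a: str, ip_b: str, prefix: int = 24) -> bool:
    """True if both announced IPs fall within the same /prefix network."""
    net = ipaddress.ip_network(f"{ip_a}/{prefix}", strict=False)
    return ipaddress.ip_address(ip_b) in net

# Both f01967469 and f01717477 announce 64.238.214.36 (per `lotus net connect`).
print(same_subnet("64.238.214.36", "64.238.214.36"))  # same host, trivially same subnet
print(same_subnet("64.238.214.36", "38.122.231.60"))  # different networks
```

Note that a shared announced IP can also indicate a shared sealing frontend rather than co-located storage, which is the explanation offered below.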

amughal commented 1 year ago

Hi @cryptowhizzard @Fenbushi-Filecoin

> @amughal we need some explanations now please.

To clarify, currently only SP f02181705 is sealing.

In total, we are looking to host up to 10 copies of the dataset.

SPs f01967469 and f01717477 initially expressed interest in hosting the data; they will be re-evaluated before any data is provided.

Explanation of setup, sealing process & retrievability

Aligned has stepped in as a SaaS sealing provider to seal this dataset on behalf of the SPs and has agreed to ensure a hot copy is available.

The SP currently sealing, f02181705, is in Montreal, Canada, while Aligned, the SaaS sealing provider, is located in Ohio, USA. Sealing is optimized by temporarily hosting the Boost node with Aligned in Ohio, so the IP address shown (64.85.173.194) is Aligned's in Ohio; the actual location of the miner and the long-term storage is 38.122.231.60. You can check the IP addresses using whois or traceroute for an accurate geolocation.

Once sealing of the 1PiB is finished, the Boost node will be moved back to the same DC as the lotus-miner in Canada. There was no issue with using the libp2p IP address from Montreal, but for the sake of transparency we have provided the Ohio IP address where Boost is currently hosted.

The next SP expected to start sealing a copy of this dataset is f02181704, located in Las Vegas. To avoid future confusion over the announced libp2p address (the same Ohio IP vs. Las Vegas), we can use the Las Vegas IP address if notaries prefer.

Please let me know if you have further questions. I can get you in touch with Aligned or SP if needed.

Thank you

cryptowhizzard commented 1 year ago

OK, that explains it. However, it does not explain why there is no retrieval.

We have worked with Aligned ourselves and never had retrieval issues.

amughal commented 1 year ago

checker:manualTrigger

filplus-checker-app[bot] commented 1 year ago

DataCap and CID Checker Report Summary[^1]

Retrieval Statistics

Storage Provider Distribution

⚠️ 1 storage providers sealed more than 70% of total datacap - f02181705: 100.00%

⚠️ All storage providers are located in the same region.

Deal Data Replication

⚠️ 100.00% of deals are for data replicated across less than 3 storage providers.

Deal Data Shared with other Clients[^3]

✔️ No CID sharing has been observed.

[^1]: To manually trigger this report, add a comment with text checker:manualTrigger

[^2]: Deals from those addresses are combined into this report as they are specified with checker:manualTrigger

[^3]: To manually trigger this report with deals from other related addresses, add a comment with text checker:manualTrigger <other_address_1> <other_address_2> ...

Full report

Click here to view the CID Checker report. Click here to view the Retrieval report.

amughal commented 1 year ago

> OK, that explains it. However, it does not explain why there is no retrieval.
>
> We have worked with Aligned ourselves and never had retrieval issues.

Hi @cryptowhizzard @Fenbushi-Filecoin

We have fixed the Boost configuration to allow retrievals. I have run the bot over the last few days, and the statistical sample has been improving gradually, though it seems the checker:manualTrigger bot will need another week to show the increased retrieval percentage.

Can we get the next tranche approved, please?

Thanks

herrehesse commented 1 year ago

@amughal Good that you are working on retrieval! Looking forward to your data distribution.

amughal commented 1 year ago

> @amughal Good that you are working on retrieval! Looking forward to your data distribution.

Hi @herrehesse, thanks for getting back. The second miner in Las Vegas is on standby, waiting for the tranche to be approved. The goal for the next tranche is to distribute deals between both miners. A third SP on the East Coast will be ready this week as well, which will further improve data distribution.