Open amughal opened 1 year ago
Thanks for your request!
Heads up, you’re requesting more than the typical weekly onboarding rate of DataCap!
Thanks for your request! Everything looks good. :ok_hand:
A Governance Team member will review the information provided and contact you back pretty soon.
Does the data in this application overlap with the data from the previous application?
Question: Is a small subset of data sharing allowed (two datasets out of 10)? If not, then I will make sure datasets will not overlap.
A small subset of data sharing is not allowed.
Ok, understood, thanks. I will make sure the datasets do not overlap between the previous approval and this one.
Total DataCap requested
10 PiB
Expected weekly DataCap usage rate
300 TiB
Client address
f1b4u4eclr63rjz2wqbnlso75vs5p5qp4rdmj45ai
f02049625
f1b4u4eclr63rjz2wqbnlso75vs5p5qp4rdmj45ai
150TiB
5d4cc1ae-b938-4f1b-a423-f1262658bbdf
Your Datacap Allocation Request has been proposed by the Notary
bafy2bzaceaazbte5g6p75rjuz5uhdripbqchcc6zhq35wolbdkw5q2ywekbjw
Address
f1b4u4eclr63rjz2wqbnlso75vs5p5qp4rdmj45ai
Datacap Allocated
150.00TiB
Signer Address
f1kqdiokoeubyse4qpihf7yrpl7czx4qgupx3eyzi
Id
5d4cc1ae-b938-4f1b-a423-f1262658bbdf
You can check the status of the message here: https://filfox.info/en/message/bafy2bzaceaazbte5g6p75rjuz5uhdripbqchcc6zhq35wolbdkw5q2ywekbjw
Approved the first tranche of DataCap because Mongo Storage is a reputable SP that has gone through the ESPA program, and because the dataset is public and will be retrievable and stored around the world.
After reading history, I have some questions:
I looked at the data sample. How will you allocate this data, which cities do you plan to store it in, and which storage vendors are you currently working with? Please list the SPs you plan to collaborate with and their regions. I look forward to hearing from you.
Hello @zcfil . Please see replies below:
These are the two additional Miner IDs [BDE]: f01967469, f01717477
@zcfil Please let me know if you have any further questions.
Can you confirm your identity by sending an email from your official domain to filplus-app-review@fil.org, with a copy to reymond.bu@gmail.com? The email subject should include question ID #2040.
@zcfil I have just sent the email. Thanks
Gmail is not authoritative. If you have any communication results, please feel free to reply at any time. @Sunnyiscoming May I ask whether this is a validated LDN?
@zcfil I have sent you another email with screenshot.
@zcfil I have sent you another email from my official email. Let me know what else is needed?
Hi @Sunnyiscoming, @zcfil is waiting for your input. Thanks
@amughal did you contact Common Crawl directly? They mentioned to us that serving data from their AWS bucket is extremely expensive for them, and that they will likely provide you with a direct link to their HTTP server.
Confirming on behalf of @Sunnyiscoming - email received and confirmed @zcfil
@xinaxu This is a very valid concern; I saw issues downloading from AWS last year. Since then, I have used their HTTP servers. In the moonlanding channel, others complained about slowness and often being unable to retrieve data. I mentioned the HTTP option to Caro and in the channel, and it worked perfectly. Thanks for raising this.
Your Datacap Allocation Request has been approved by the Notary
bafy2bzacedwnnnseafbrhvcjiypqcn66sptanrwhbkgsobxrqz3veo3qbnzza
Address
f1b4u4eclr63rjz2wqbnlso75vs5p5qp4rdmj45ai
Datacap Allocated
150.00TiB
Signer Address
f1yjhnsoga2ccnepb7t3p3ov5fzom3syhsuinxexa
Id
5d4cc1ae-b938-4f1b-a423-f1262658bbdf
You can check the status of the message here: https://filfox.info/en/message/bafy2bzacedwnnnseafbrhvcjiypqcn66sptanrwhbkgsobxrqz3veo3qbnzza
Approved with clarifications from the T&T team. Also, I agree HTTP is the way to go; I had to crawl their website to get the direct HTTP download links. You cannot download this dataset from the S3 bucket anonymously.
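To illustrate the HTTP path discussed above: Common Crawl publishes its objects over plain HTTP at data.commoncrawl.org, with the S3 object key mapped directly onto the URL path. A minimal sketch (the crawl ID below is an illustrative example, not one from this application):

```python
# Sketch: map a Common Crawl S3 object key to its public HTTP URL.
# Common Crawl serves the same keys over HTTP at data.commoncrawl.org,
# which avoids the anonymous-S3 limitation discussed above.
# (The crawl ID below is illustrative, not taken from this application.)

def s3_key_to_http_url(key: str) -> str:
    return f"https://data.commoncrawl.org/{key}"

url = s3_key_to_http_url("crawl-data/CC-MAIN-2023-50/warc.paths.gz")
print(url)  # https://data.commoncrawl.org/crawl-data/CC-MAIN-2023-50/warc.paths.gz
```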
Thank you @kernelogic. Is the DataCap allocation of 150TiB just the first tranche? We are hoping to start slow but reach 1PiB of sealing per week. Would that be an issue?
Received that.
@amughal yes, it is the first tranche. Each subsequent tranche is 200% of the previous one, so the next top-up will be 300TiB.
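The doubling rule described above can be sketched as follows. This is a minimal illustration only, assuming each tranche is simply double the previous one and the running total is capped by the total request; the actual Fil+ tranche rules may differ:

```python
# Sketch of the tranche schedule implied above: each tranche is 200%
# of the previous one, capped so the running total never exceeds the
# total DataCap requested. Illustrative only; not the official Fil+ logic.

def tranche_schedule(first_tranche_tib: int, total_tib: int) -> list[int]:
    tranches = []
    size = first_tranche_tib
    remaining = total_tib
    while remaining > 0:
        grant = min(size, remaining)  # final tranche may be smaller
        tranches.append(grant)
        remaining -= grant
        size *= 2  # next tranche is 200% of the previous
    return tranches

# 10 PiB total = 10 * 1024 TiB, first tranche of 150 TiB
print(tranche_schedule(150, 10 * 1024))
# [150, 300, 600, 1200, 2400, 4800, 790]
```

Under these assumptions, a 10 PiB request starting at 150 TiB would be exhausted in seven tranches.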
f02049625
f1b4u4eclr63rjz2wqbnlso75vs5p5qp4rdmj45ai
300TiB
2ae97ea4-d5af-4a93-9c1c-1a0c76742ce1
f02049625
f1b4u4eclr63rjz2wqbnlso75vs5p5qp4rdmj45ai
100% of weekly dc amount requested
300TiB
150TiB
9.85PiB
Number of deals | Number of storage providers | Previous DC Allocated | Top provider | Remaining DC |
---|---|---|---|---|
3593 | 1 | 150TiB | 100 | 35.57TiB |
Hello @kernelogic. We have actively started sealing at a high rate, around 25-30TiB a day, using the SaaS service provider. Based on current usage, this application now requires signatures. Could you sign off as soon as possible? This would help us continue sealing during the holidays.
@kernelogic @Kevin-FF-USA @jamerduhgamer @zcfil @xinaxu Hello notaries, I need quick approval for the next tranche; any help is appreciated. Thank you, Azher
checker:manualTrigger
⚠️ All retrieval success ratios are below 1%.
⚠️ 1 storage providers sealed more than 70% of total datacap - f02181705: 100.00%
⚠️ All storage providers are located in the same region.
⚠️ 100.00% of deals are for data replicated across less than 3 storage providers.
✔️ No CID sharing has been observed.
[^1]: To manually trigger this report, add a comment with text checker:manualTrigger
[^2]: Deals from those addresses are combined into this report as they are specified with checker:manualTrigger
[^3]: To manually trigger this report with deals from other related addresses, add a comment with text checker:manualTrigger <other_address_1> <other_address_2> ...
Click here to view the CID Checker report. Click here to view the Retrieval report.
All retrieval success ratios are 0; you need to figure out a way to improve the retrieval success rate before signing.
Based on the report, the miner ID seems to be different from the ones you mentioned above, and only one miner is involved. Any reasons or explanations?
Hello @Fenbushi-Filecoin. The goal is to distribute 10 copies. I started with the active miner, f02181705; more miners will begin sealing gradually starting with the second tranche. Thanks
@Fenbushi-Filecoin
Does it make a difference then?
```
root@coinlisthhw:~# lotus net connect f01967469
f01967469 -> {12D3KooWAtFaj2fcUzeVFx8NYsZAwnY1Nu6QyhdwswRYTBtHccPR: [/ip4/64.238.214.36/tcp/24006]}
connect 12D3KooWAtFaj2fcUzeVFx8NYsZAwnY1Nu6QyhdwswRYTBtHccPR: success
root@coinlisthhw:~# lotus net connect f01717477
f01717477 -> {12D3KooWAtFaj2fcUzeVFx8NYsZAwnY1Nu6QyhdwswRYTBtHccPR: [/ip4/64.238.214.36/tcp/24006]}
connect 12D3KooWAtFaj2fcUzeVFx8NYsZAwnY1Nu6QyhdwswRYTBtHccPR: success
```
They are all sharing the same IP subnet, no geographic distribution.
@amughal we need some explanations now please.
Hi @cryptowhizzard @Fenbushi-Filecoin
> @amughal we need some explanations now please.
To clarify, currently only SP f02181705 is sealing.
In total, we are looking to host up to 10 copies of the dataset.
SPs f01967469 and f01717477 initially expressed interest in hosting the data; they will be re-evaluated before any data is provided.
Explanation of setup, sealing process & retrievability
Aligned has stepped in to seal this dataset as a SaaS sealing provider on behalf of the SPs and has agreed to ensure a hot copy is available.
The current sealing SP (f02181705) is in Montreal, Canada, while Aligned, the SaaS sealing provider, is located in Ohio, USA. To optimize sealing, the Boost node is temporarily hosted with Aligned in Ohio, so the IP address shown (64.85.173.194) is Aligned's in Ohio, while the miner and its long-term storage are actually at 38.122.231.60. You can check the IP addresses using whois or traceroute for an accurate geolocation.
Once sealing is finished (1PiB), the Boost node will be switched back to the same DC as the lotus-miner in Canada. There was no issue with using the libp2p IP address from Montreal, but for the sake of transparency we have provided the Ohio IP address where Boost is currently hosted.
The next SP expected to start sealing a copy of this dataset is f02181704, located in Las Vegas. To avoid future confusion over the announced libp2p address (the same Ohio IP vs. Las Vegas), we can use the Las Vegas IP address if notaries prefer.
Please let me know if you have further questions. I can get you in touch with Aligned or SP if needed.
Thank you
Ok, that explains it. However, it does not explain why there is no retrieval.
We have worked with Aligned ourselves and never had retrieval issues.
checker:manualTrigger
⚠️ 1 storage providers sealed more than 70% of total datacap - f02181705: 100.00%
⚠️ All storage providers are located in the same region.
⚠️ 100.00% of deals are for data replicated across less than 3 storage providers.
✔️ No CID sharing has been observed.
[^1]: To manually trigger this report, add a comment with text checker:manualTrigger
[^2]: Deals from those addresses are combined into this report as they are specified with checker:manualTrigger
[^3]: To manually trigger this report with deals from other related addresses, add a comment with text checker:manualTrigger <other_address_1> <other_address_2> ...
Click here to view the CID Checker report. Click here to view the Retrieval report.
Hi @cryptowhizzard @Fenbushi-Filecoin
We have fixed the Boost configuration to allow retrievals. I have run the bot over the last few days, and the statistical sampling has been increasing gradually. It seems the checker:manualTrigger bot requires another week to show an increased retrieval percentage, though.
Can we get the next tranche approved please?
Thanks
@amughal Good that you are working on retrieval! Looking forward to your data distribution.
Hi @herrehesse, thanks for getting back. The second miner, in Las Vegas, is on standby waiting for the tranche to be approved. The goal for the next tranche is to distribute deals among both miners. A third SP on the East Coast will be ready this week as well, and data distribution will further improve.
Data Owner Name
Common Crawl
What is your role related to the dataset
Data Preparer
Data Owner Country/Region
United States
Data Owner Industry
Not-for-Profit
Website
https://commoncrawl.org/
Social Media
Total amount of DataCap being requested
10PiB
Expected size of single dataset (one copy)
1PiB
Number of replicas to store
10
Weekly allocation of DataCap requested
300TiB
On-chain address for first allocation
f1b4u4eclr63rjz2wqbnlso75vs5p5qp4rdmj45ai
Data Type of Application
Public, Open Dataset (Research/Non-Profit)
Custom multisig
Identifier
No response
Share a brief history of your project and organization
Is this project associated with other projects/ecosystem stakeholders?
Yes
If answered yes, what are the other projects/ecosystem stakeholders
Describe the data being stored onto Filecoin
Where was the data currently stored in this dataset sourced from
AWS Cloud
If you answered "Other" in the previous question, enter the details here
No response
How do you plan to prepare the dataset
singularity
If you answered "other/custom tool" in the previous question, enter the details here
No response
Please share a sample of the data
Confirm that this is a public dataset that can be retrieved by anyone on the Network
If you chose not to confirm, what was the reason
No response
What is the expected retrieval frequency for this data
Sporadic
For how long do you plan to keep this dataset stored on Filecoin
More than 3 years
In which geographies do you plan on making storage deals
Greater China, Asia other than Greater China, Africa, North America, South America, Europe, Australia (continent), Antarctica
How will you be distributing your data to storage providers
HTTP or FTP server
How do you plan to choose storage providers
Slack, Big Data Exchange, Partners
If you answered "Others" in the previous question, what is the tool or platform you plan to use
No response
If you already have a list of storage providers to work with, fill out their names and provider IDs below
How do you plan to make deals to your storage providers
Boost client, Lotus client, Singularity
If you answered "Others/custom tool" in the previous question, enter the details here
No response
Can you confirm that you will follow the Fil+ guideline
Yes