hash889900 opened this issue 1 month ago
https://compliance.allocator.tech/report/f03014732/1722224202/report.md
Report updated again, retrieval rates are steadily improving
@Kevin-FF-USA @filecoin-watchdog @galen-mcandrew I see that the list is out; could you check what the issue is here?
https://compliance.allocator.tech/report/f03014732/1722434626/report.md Report updated again, retrieval rates are steadily improving
https://compliance.allocator.tech/report/f03014732/1722567568/report.md Report updated again, but it looks like this report shows the wrong retrieval rate
Here is the bot report from the repo: https://check.allocator.tech/report/hash889900/HashTeam/issues/3/1722567088482.md
https://check.allocator.tech/report/hash889900/HashTeam/issues/8/1722567163946.md
In the latest report so far, the client explained that a new version update caused a problem with boostd connectivity. The fault lasted a few days, which caused the retrieval rate to drop; it has now been fixed. You can see that the retrieval rates of the client's newly added SPs keep increasing, and that retrieval rates are continuing to improve overall: https://check.allocator.tech/report/hash889900/HashTeam/issues/8/1723444429395.md
Here are the 24-hour retrieval rates for the five additional SPs:
Based on an additional compliance review, this allocator is currently working with public open dataset clients. Additionally, the data associated with this pathway is starting to be retrievable at scale, and retrieval testing is increasing. We are also seeing diligence and interventions from the allocator toward the client.
This allocator is currently mostly working with a single client, which appears to be using some VPN-hosted SPs. According to the allocator's application, they would require some additional details in the event that a client is working with these kinds of SPs (question 28). Can you please provide some evidence of your diligence into this client's SPs?
As a reminder, the allocator team is responsible for verifying, supporting, and intervening with their clients. If a client is NOT providing accurate deal-making info (such as incomplete or inaccurate SP details) or making deals with noncompliant unretrievable SPs, then the allocator needs to intervene and require client updates before more DataCap should be awarded.
We are also hoping to increase the allocator's role in supporting consistent data preparation, especially for public open datasets. There have been some conversations about increasing diligence, such as more comprehensive questions. In what ways are you investigating your clients' data preparation and distribution?
Depending on replies to the above and any additional details from the allocator, we would like to request an additional 5PiB of DataCap.
Can you please provide some evidence of your diligence into this client's SPs?
Here are some comments as well as screenshots https://github.com/hash889900/HashTeam/issues/3#issuecomment-2196681610
https://github.com/hash889900/HashTeam/issues/8#issuecomment-2219721349
We are also hoping to increase the allocator's role in supporting consistent data preparation, especially for public open datasets. There have been some conversations about increasing diligence, such as https://github.com/filecoin-project/Allocator-Governance/issues/125. In what ways are you investigating your clients' data preparation and distribution?
We will adopt some of the questions posed in #125 to investigate client data preparation and distribution.
@galen-mcandrew thanks
Review of Allocations from HashTeam Allocator Application: https://github.com/filecoin-project/notary-governance/issues/1050
First example: DataCap was given to: https://github.com/hash889900/HashTeam/issues/3
First: 256 TiB • Second: 512 TiB • Third: 1 PiB • Fourth: 2 PiB. Due diligence was conducted before each signing, paying close attention to Spark data, SP disclosures, etc. Screenshots below.
SP disclosure:
- f02866666 /ip4/103.160.100.187/tcp/6666 Tokyo
- f03144777 /ip4/103.160.100.187/tcp/44777 Tokyo
- f02820000 /ip4/14.215.165.46/tcp/15890 Guangzhou
- f02639655 /ip4/113.96.22.206/tcp/6789 Guangzhou
- f02588263 /ip4/103.201.24.43/tcp/5232 Hong Kong
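As an illustrative aside (not part of the original review), disclosures in the simple `/ip4/<host>/tcp/<port>` form shown above can be checked mechanically, which makes it easy to spot SPs that share one IP address — relevant when screening for VPN-hosted SPs. This is a minimal sketch using only the Python standard library; the `disclosures` mapping just repeats a few entries from the list above.

```python
import re
from collections import Counter

def parse_multiaddr(maddr: str) -> tuple[str, int]:
    """Parse a simple /ip4/<host>/tcp/<port> multiaddr into (host, port)."""
    m = re.fullmatch(r"/ip4/([\d.]+)/tcp/(\d+)", maddr)
    if not m:
        raise ValueError(f"unsupported multiaddr: {maddr}")
    return m.group(1), int(m.group(2))

# A few SP disclosures copied from the list above
disclosures = {
    "f02866666": "/ip4/103.160.100.187/tcp/6666",
    "f03144777": "/ip4/103.160.100.187/tcp/44777",
    "f02820000": "/ip4/14.215.165.46/tcp/15890",
}

# Count how many disclosed SPs sit behind each IP address
ip_counts = Counter(parse_multiaddr(m)[0] for m in disclosures.values())
shared_ips = [ip for ip, n in ip_counts.items() if n > 1]

for sp_id, maddr in disclosures.items():
    host, port = parse_multiaddr(maddr)
    print(sp_id, host, port)
print("IPs shared by multiple SPs:", shared_ips)
```

Note that a multiaddr-sharing check like this only flags co-located endpoints; it does not by itself prove VPN hosting, which still needs the kind of manual diligence discussed in this thread.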
Actual data storage report: https://check.allocator.tech/report/hash889900/HashTeam/issues/3/1721384980892.md
All SPs support Spark retrieval, and the overall retrieval rate has been increasing
Second example
DataCap was given to: https://github.com/hash889900/HashTeam/issues/8
First: 256 TiB. Due diligence was conducted before each signing, paying close attention to Spark data, SP disclosures, etc. Screenshots below.
SP disclosure:
- f01930832 /dns4/1f4952i942.iok.la/tcp/24001 (112.86.232.91) Nanjing
- f01390330 /ip4/113.132.177.9/tcp/24001 Xi'an
- f02899169 /ip4/113.132.177.9/tcp/25001 Xi'an
- f01888808 /ip4/86.123.188.55/tcp/25001 Romania
Actual data storage report: https://check.allocator.tech/report/hash889900/HashTeam/issues/8/1721289192197.md
If you have any questions or suggestions, please leave a message, thank you!