filecoin-project / Allocator-Governance


Community Diligence Review of IPFSTT Allocator #93

Closed. nicelove666 closed this issue 1 month ago.

nicelove666 commented 1 month ago

Review of Allocations from IPFSTT Allocator Application: https://github.com/filecoin-project/notary-governance/issues/1006

  1. IPFSTT has created a due diligence form; in addition to submitting the LDN application form, applying clients must also fill out the IPFSTT due diligence form. https://github.com/nicelove666/IPFSTT-Client-Due-Diligence-Form/issues/new/choose
  2. Most of the SPs that IPFSTT collaborates with are able to support Spark.
  3. IPFSTT is actively promoting application submissions from enterprise clients and, through the joint efforts of the team, has received an application from an enterprise-level client.

First example: DataCap was given to: https://github.com/nicelove666/Allocator-Pathway-IPFSTT/issues/27

• First: 256 TiB
• Second: 512 TiB
• Third: 1 PiB
• Fourth: 2 PiB
• Fifth: 1 PiB
• Sixth: 1 PiB

Due diligence was conducted before each signing, paying close attention to Spark data, SP disclosure, etc.

SP disclosure:
• Polaris f02951213 (Singapore)
• ShenSuanCloud f03035686 (Nanchang, Jiangxi, CN)
• CoffeeCloud f03086293 (Hong Kong)
• CoffeeCloud f03136267 (Hong Kong)
• Round Arithmetic f02200472 (Chengdu, Sichuan, CN)
• Individual f02956383 (Hong Kong)
• Lucky Star f03068013 (Hong Kong)

Actual data storage report: https://github.com/nicelove666/Allocator-Pathway-IPFSTT/issues/27#issuecomment-2227796526

| Provider | Location | Organization | Total Deals Sealed | Percentage | Unique Data | Duplicate Deals | Mean Spark Retrieval Success Rate (7d) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| f02956383 | Hong Kong, Hong Kong, HK | ANYUN INTERNET TECHNOLOGY (HK) CO., LIMITED | 630.09 TiB | 13.10% | 630.06 TiB | 0.00% | 9.25% |
| f02200472 | Chengdu, Sichuan, CN | CHINANET SiChuan Telecom Internet Data Center | 59.88 TiB | 1.24% | 59.88 TiB | 0.00% | 0.63% |
| f03035686 | Shenzhen, Guangdong, CN | CHINANET-BACKBONE | 1.24 PiB | 26.35% | 1.24 PiB | 0.00% | 42.14% |
| f03136267 | Hong Kong, Hong Kong, HK | HK Broadband Network Ltd. | 1.09 PiB | 23.19% | 1.09 PiB | 0.00% | 12.57% |
| f03144077 | Hong Kong, Hong Kong, HK | HK Broadband Network Ltd. | 651.44 TiB | 13.54% | 651.44 TiB | 0.00% | 71.26% |
| f03086293 | Hong Kong, Hong Kong, HK | HK Broadband Network Ltd. | 299.31 TiB | 6.22% | 299.31 TiB | 0.00% | 0.00% |
| f03068013 | Hong Kong, Hong Kong, HK | PCCW Global, Inc. | 424.91 TiB | 8.83% | 424.91 TiB | 0.00% | 8.78% |
| f02951213 | Singapore, Singapore, SG | StarHub Ltd | 362.28 TiB | 7.53% | 362.28 TiB | 0.00% | 68.44% |

The client disclosed 7 SPs but actually collaborated with 8, adding 1 new SP. The SPs generally match the disclosure. For the 1 newly added SP, the client has provided detailed information about its geographic location and company on GitHub. https://github.com/nicelove666/Allocator-Pathway-IPFSTT/issues/27#issuecomment-2227935442

All SPs support Spark


Second example

DataCap was given to: https://github.com/nicelove666/Allocator-Pathway-IPFSTT/issues/34

• First: 256 TiB
• Second: 512 TiB
• Third: 1 PiB
• Fourth: 1 PiB

Due diligence was conducted before each signing, paying close attention to Spark data, SP disclosure, etc.

SP disclosure:
• f01422327 - Japan
• f02252023 - Japan
• f02252024 - Japan
• f01989013 - Malaysia
• f01989014 - Malaysia
• f01989015 - Malaysia
• f02105010 - Malaysia

Actual data storage report: https://github.com/nicelove666/Allocator-Pathway-IPFSTT/issues/34#issuecomment-2227814765

| Provider | Location | Organization | Total Deals Sealed | Percentage | Unique Data | Duplicate Deals | Mean Spark Retrieval Success Rate (7d) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| f02105010 | Kuala Lumpur, MY | Extreme Broadband - Total Broadband Experience | 300.00 TiB | 17.01% | 300.00 TiB | 0.00% | - |
| f01989015 | Kuala Lumpur, MY | Extreme Broadband - Total Broadband Experience | 300.00 TiB | 17.01% | 300.00 TiB | 0.00% | - |
| f01989013 | Kuala Lumpur, MY | Extreme Broadband - Total Broadband Experience | 300.00 TiB | 17.01% | 300.00 TiB | 0.00% | - |
| f01989014 | Kuala Lumpur, MY | Extreme Broadband - Total Broadband Experience | 300.00 TiB | 17.01% | 300.00 TiB | 0.00% | - |
| f02252024 | JP | TOKAI Communications Corporation | 188.00 TiB | 10.66% | 188.00 TiB | 0.00% | - |
| f01422327 | JP | TOKAI Communications Corporation | 188.00 TiB | 10.66% | 188.00 TiB | 0.00% | - |
| f02252023 | JP | TOKAI Communications Corporation | 188.00 TiB | 10.66% | 188.00 TiB | 0.00% | - |

All disclosed SPs exactly match the SPs actually cooperated with.

Retrieval from these SPs is working normally.


This is an enterprise customer. We mainly confirm the customer's identity through emails and conference calls. The domain email has been forwarded to filplus-app-review@fil.org; please review it.

Third example

DataCap was given to: https://github.com/nicelove666/Allocator-Pathway-IPFSTT/issues/41

First: 256 TiB. Only 256 TiB has been approved so far; the check-bot report has not been updated yet. We will continue to pay attention.

filecoin-watchdog commented 1 month ago

Retrievals still at 0% https://compliance.allocator.tech/report/f03011612/1721396249/report.md @galen-mcandrew

First Diligence review: https://github.com/filecoin-project/Allocator-Governance/issues/9

nicelove666 commented 1 month ago

@filecoin-watchdog First, please state the facts truthfully. If the content you are presenting is deceptive in nature, please remain silent; otherwise, this will continue to lower your credibility.

Second, I hope you can explain in detail why you consecutively signed three 1.95 PiB allocations and issued them only to yourself (Dcent). The key point is that your three 1.95 PiB signatures have no record on GitHub and were signed manually.

(Screenshot caption: evidence of the Dutchman's self-dealing)

filecoin-watchdog commented 1 month ago

I think you have the wrong watchdog, @nicelove666. I'll also take a look at the DCent allocator when they are close to renewal, thanks.

nicelove666 commented 1 month ago

Let me provide a detailed and impartial explanation of the IPFSTT allocator. Thank you for taking the time to read this, @galen-mcandrew.

First, this is the allocation situation for the first round of 5P: https://github.com/filecoin-project/Allocator-Governance/issues/9
Secondly, this is the allocation situation for the second round of 10P: https://github.com/filecoin-project/Allocator-Governance/issues/93#issue-2416066083
Finally, you can see the overall allocation situation here: https://compliance.allocator.tech/report/f03011612/1721396249/report.md

For the first round of 5P, detailed discussions have already taken place, including communication on Slack (https://filecoinproject.slack.com/archives/C06MTBZ44P2/p1718358842699939), communication on GitHub (https://github.com/filecoin-station/spark/issues/74), and statements made at the notary meetings. So the details of the 5P allocation will not be repeated here. Interested readers can look here: https://github.com/filecoin-project/Allocator-Governance/issues/9

In the first round of 5P, we made the following commitments:

  1. We will strive to find and support enterprise client applications.
  2. We will strengthen due diligence, including due diligence on applicants via GitHub, email, and telephone communication.
  3. We will guide, support and require our clients to collaborate with retrievable SPs to ensure their data is successfully stored and retrieved.

In the second round of 10P, to fulfill these commitments, we have done the following work:

  1. We successfully collaborated with an enterprise client, and I asked them to send emails from a domain email address, which has been forwarded to filplus-app-review@fil.org. Please check it, @Kevin-FF-USA.

  2. To strengthen due diligence, we have created an additional due diligence form, which can help PL and FF better understand the situation: https://github.com/nicelove666/IPFSTT-Client-Due-Diligence-Form/issues/new/choose. At the same time, our signatures strictly follow the rules we wrote when applying to be notaries: 256 TiB in the first round, 512 TiB in the second round, 1 PiB in the third round, 2 PiB in the fourth round, and never more than 2 PiB per round (a small illustrative sketch of this schedule appears after this list).


    Most importantly, we have conducted detailed due diligence before each round of signing, and if there are any doubts, we will stop signing until the doubts are resolved.

  3. In terms of retrieval, the SPs we collaborate with have made great progress; the SPs in https://github.com/nicelove666/Allocator-Pathway-IPFSTT/issues/27 have achieved Spark retrieval success rates of 85%-100%.

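As referenced in item 2 above, here is a minimal illustrative sketch of that signing schedule. It is my own restatement in Python, not an IPFSTT tool: each tranche doubles the previous one, starting at 256 TiB and capped at 2 PiB per round. Actual allocations may be smaller than the cap, as in the 1 PiB fifth and sixth rounds of the first example.

```python
#!/usr/bin/env python3
"""Illustrative sketch of the signing schedule described above (a restatement,
not an official IPFSTT tool): each tranche doubles the previous one, starting
at 256 TiB and capped at 2 PiB per round."""

TIB = 1            # work in TiB units
PIB = 1024 * TIB   # 1 PiB = 1024 TiB
START = 256 * TIB
CAP = 2 * PIB

def tranche_sizes(rounds: int) -> list[int]:
    """Return the maximum DataCap (in TiB) allowed for each signing round."""
    sizes, current = [], START
    for _ in range(rounds):
        sizes.append(current)
        current = min(current * 2, CAP)
    return sizes

if __name__ == "__main__":
    for i, size in enumerate(tranche_sizes(6), start=1):
        label = f"{size} TiB" if size < PIB else f"{size // PIB} PiB"
        print(f"round {i} cap: {label}")
```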

Although Spark statistics for the enterprise client https://github.com/nicelove666/Allocator-Pathway-IPFSTT/issues/34 cannot be collected, we can manually verify that the SPs support boost and lassie retrieval, and we can use run_retrieval_test to count the SPs' retrieval success rate and number of successful retrievals in more detail. In fact, this enterprise customer has done very well: every SP they cooperate with stores unsealed files and supports retrieval, and we can track this data in real time.
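To illustrate what such a manual spot-check could look like, below is a minimal sketch. It is not the run_retrieval_test tool mentioned above; it assumes the lassie CLI is installed and that its fetch command accepts a --providers flag to pin the retrieval to a specific SP, and the CID and multiaddr shown are placeholders rather than real deal data.

```python
#!/usr/bin/env python3
"""Illustrative sketch: spot-check whether sample payload CIDs are retrievable
from specific SPs. Assumptions: the `lassie` CLI is on PATH and its `fetch`
subcommand accepts `--providers`; CIDs and multiaddrs below are placeholders."""
import os
import subprocess
import tempfile

# Placeholder samples: one payload CID per SP, with the SP's multiaddr + peer ID.
SAMPLES = {
    "f0xxxxxxx": ("bafyplaceholderpayloadcid",
                  "/ip4/203.0.113.10/tcp/30001/p2p/12D3KooWExamplePeerID"),
}

def check_retrieval(cid: str, provider: str, timeout_s: int = 120) -> bool:
    """Return True if lassie fetches the CID from the given provider in time."""
    with tempfile.TemporaryDirectory() as tmp:
        out_path = os.path.join(tmp, "retrieved.car")
        try:
            result = subprocess.run(
                ["lassie", "fetch", "--providers", provider, "-o", out_path, cid],
                capture_output=True, text=True, timeout=timeout_s,
            )
        except subprocess.TimeoutExpired:
            return False
        return result.returncode == 0

if __name__ == "__main__":
    for sp, (cid, addr) in SAMPLES.items():
        print(f"{sp}: {'retrievable' if check_retrieval(cid, addr) else 'FAILED'}")
```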

Therefore, in the 10P allocation of the second round, all our clients support retrieval, with an average retrieval success rate of over 90%.

Finally, this is our overall allocation situation: https://compliance.allocator.tech/report/f03011612/1721396249/report.md. We have collaborated with 6 clients and 28 SPs, distributed across mainland China, Hong Kong, Japan, Vietnam, Singapore, and other regions. Although Spark cannot measure all of our SPs, we can use technical means to verify that these SPs store unsealed files and can support retrieval. Going forward, we will be more proactive in finding SP solutions that support Spark, strengthen communication with the Spark team, and explore more enterprise clients. We believe that more and more SPs supporting Spark will emerge!

nicelove666 commented 1 month ago

Finally, let me repeat: in the second round of 10P, almost all SPs support retrieval.

TrueBlood1 commented 1 month ago

The retrieval results show everything.

Most of the SPs mentioned below show 0% successful retrievals.

Some of your SPs cannot even be found on the Spark dashboard.

https://github.com/filecoin-project/Allocator-Governance/issues/93#issuecomment-2239208526

Retrievals still at 0% https://compliance.allocator.tech/report/f03011612/1721396249/report.md @galen-mcandrew

First Diligence review: #9


Everything you have said seems to be empty words. There has been no progress since your allocator was last allocated 10 PiB.

@galen-mcandrew

nicelove666 commented 1 month ago

@TrueBlood1 1. The link you cited is from a week ago, and the data has since been updated. You should look at this link: https://compliance.allocator.tech/report/f03011612/1721999568/report.md.

2. Why are you ignoring the SPs that do support Spark and pretending not to see their 50%-97% Spark success rates?


3. The SPs in your screenshot that do not support Spark are mostly from the first round of 5P. Additionally, there is one from the enterprise customer dangbei, at https://github.com/nicelove666/Allocator-Pathway-IPFSTT/issues/34. These SPs actually store the unsealed files and can support boost and other retrieval methods. The reason for not supporting Spark was explained by Josh in a meeting: their open-source solution is coming soon.

4. Why should I help Josh and allocate 2.75P of DC to their customers? Because they have brought us enterprise-level customer applications, which is exactly what we need. Meanwhile, our team has verified that their technical solution is indeed feasible. Of course, the views of PL and FF take precedence, but regardless of whether PL and FF accept Josh's solution, we have only allocated 2.75P of DC to Josh. The remaining 7.25P is all used for clients whose SPs support Spark. So in our second round of 10P, we still have 70% of the quota for SPs that support Spark, and 30% for SPs that support boost but do not yet support Spark.

nicelove666 commented 1 month ago

@TrueBlood1 Finally, I want to tell you that defamation not based on facts is very easily refuted.

A person who does not state the facts cannot maintain credibility. In the long run, people will treat their words as empty air.

The latest data and information will appear first at the notary meetings. I hope you will fully understand the facts before making any statements.

galen-mcandrew commented 1 month ago

> 4. Why should I help Josh and allocate 2.75P of DC to their customers? Because they have brought us enterprise-level customer applications, which is exactly what we need. Meanwhile, our team has verified that their technical solution is indeed feasible. Of course, the views of PL and FF take precedence, but regardless of whether PL and FF accept Josh's solution, we have only allocated 2.75P of DC to Josh. The remaining 7.25P is all used for clients whose SPs support Spark. So in our second round of 10P, we still have 70% of the quota for SPs that support Spark, and 30% for SPs that support boost but do not yet support Spark.

@nicelove666 Can you provide more information here? I want to make sure I understand which "solution" you are referring to in this comment.

nicelove666 commented 1 month ago

Good morning, @galen-mcandrew. The "Josh" I mentioned is joshua-ne, https://github.com/filecoin-project/Allocator-Governance/issues/63. They explained their situation at the notary meeting. They are currently facing a problem: all their SPs support retrieval through tools like boost, but their SPs are not being counted by Spark. The root cause is that the index has not been published, and Spark requires index-based retrieval.

Our team suspects that SPs running other stacks, such as Venus, are also not counted by Spark; this may be for the same reason.
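To make the "index has not been published" point concrete: Spark samples retrievals against content advertised to IPNI, so an SP whose deals were never advertised simply does not appear in its measurements. The sketch below is my own illustration, not the Josh team's tool or Spark's checker; it assumes the public IPNI instance at https://cid.contact exposes a /cid/<cid> lookup returning JSON provider records, and the CID used is a placeholder.

```python
#!/usr/bin/env python3
"""Illustrative sketch: ask IPNI which providers have advertised a payload CID.
Assumption: https://cid.contact exposes a /cid/<cid> lookup that returns JSON
provider records; the default CID below is a placeholder, not real deal data."""
import json
import sys
import urllib.error
import urllib.request

def ipni_providers(cid: str) -> list[str]:
    """Return the peer IDs IPNI reports as providers for this CID ([] if none)."""
    url = f"https://cid.contact/cid/{cid}"
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return []  # no advertisement found: Spark has nothing to sample here
        raise
    peers = []
    for result in data.get("MultihashResults", []):
        for record in result.get("ProviderResults", []):
            peers.append(record.get("Provider", {}).get("ID", ""))
    return peers

if __name__ == "__main__":
    cid = sys.argv[1] if len(sys.argv) > 1 else "bafyplaceholderpayloadcid"
    peers = ipni_providers(cid)
    print(f"{cid}: {len(peers)} provider(s) advertised to IPNI", peers)
```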

Therefore, the Josh team has integrated the retrieval functionality of the v3.1 LDN check-bot and developed a tool on top of it: a command-line retrieval tool built on systems like boost. Our team has tested their solution and found it feasible. Most importantly, we have manually verified that their SPs do support retrieval through means such as boost, so we decided to help them by allocating 2.75P of DC. Of course, we are also grateful that they have brought us enterprise-level client applications, which is something we had been lacking and had promised to pursue.

An hour ago, I communicated with their team:

1. They will submit an issue to explain the situation in detail.

2. They need to open-source the tool as soon as possible; the first version of the test tool will be open-sourced next week for the community to test.

3. They will resolve the indexing issue as soon as possible, and it is expected that they will be able to support Spark after the next network upgrade on August 6th.

Next week we will be able to see the progress, and our team will continue to follow up. Sincerely, have a great day.

willscott commented 1 month ago

I attempted to pull recent deals from several of the SPs listed in #63 and was unable to connect to them with lassie / boost.

e.g.

failed to dial 12D3KooWLENb9RJxGexwFQRbxyBAApuNt3B7hN8Zz69oTCyUJwQr: all dials failed
  * [/ip4/118.140.26.165/tcp/30001] dial tcp4 118.140.26.165:30001: connect: connection refused

You will need to allow others to also run your check-bot independently if it is to be convincing.
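For anyone who wants to reproduce a check like the one above without installing lassie or boost, the minimal sketch below only tests raw TCP reachability of the SP's advertised endpoint; it performs no libp2p handshake or actual retrieval, so a success here is weaker evidence than a real fetch. The address is the one quoted in the dial error above.

```python
#!/usr/bin/env python3
"""Minimal sketch: raw TCP reachability check for an SP's advertised endpoint.
This only verifies that the port accepts connections; it does not perform the
libp2p handshake or a real boost/lassie retrieval."""
import socket

def tcp_reachable(host: str, port: int, timeout_s: float = 10.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError as err:  # e.g. ConnectionRefusedError or a timeout
        print(f"{host}:{port} unreachable: {err}")
        return False

if __name__ == "__main__":
    # Address quoted in the dial error above.
    print("reachable" if tcp_reachable("118.140.26.165", 30001) else "unreachable")
```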

nicelove666 commented 1 month ago

Could you provide the specific SP ID? Additionally, I wholeheartedly agree with your viewpoint: open source is a must, and they will also publish the indexes quickly to support Spark.

TrueBlood1 commented 1 month ago

@nicelove666 All the data in the image is what I found that day by checking the Spark dashboard. I am just showing the problems you have. I don't think you should be allocated any more DataCap until your upgrades are completed. Please finish your upgrade ASAP. @galen-mcandrew

nicelove666 commented 1 month ago

The SPs that cannot be retrieved via Spark belong to dangbei, account for less than 30%, and this can be resolved immediately.

TrueBlood1 commented 1 month ago

I have looked at all of your applications, from the first to the latest, and for a long time we have not been able to see any change. As a long-term participant in the allocator program, what is needed is progress, not just sophistry.

nicelove666 commented 1 month ago

I will not reply to biased comments. Let's communicate directly at the next round of notary meetings.

nicelove666 commented 1 month ago

If you are in Dubai, we can also meet to communicate.

TrueBlood1 commented 1 month ago

What you need to do now is make sure that everyone can retrieve from most of your SPs. Stop gaslighting.

nicelove666 commented 1 month ago

@galen-mcandrew @willscott The Josh team has released a roadmap: https://github.com/joshua-ne/FIL_DC_Allocator_1022/issues/22

Yvette516 commented 1 month ago

Until there are more valid retrieval results to show, it is recommended to re-evaluate this allocator's refill so as not to cause arguments or dissatisfaction in the Fil+ community.

nicelove666 commented 1 month ago

First, all the SPs that Josh cooperates with now support Spark; everything sealed from now on will support it, although previously sealed data did not. Second, their solution will be open-sourced soon; it will then be possible to prove that, although the files cannot be retrieved via Spark, the files are indeed stored. Third, they will give a detailed explanation at tomorrow's meeting.

Yvette516 commented 1 month ago

I did not see you or your partner at the meeting. @willscott, are their retrievals successful? I cannot get good retrieval results for their SPs. @galen-mcandrew

galen-mcandrew commented 1 month ago

Based on a further diligence review, this allocator pathway is in compliance with their application. They are continuing to increase visibility and compliance, while working towards additional scale tools to support the growth of the ecosystem. I will continue to ask all commenters to refrain from antagonistic attacks or claims, and limit these threads to the relevant details. For example, there is no reason to deflect and call into question other allocators.

We are requesting 20PiB of DataCap from RKH for this pathway to increase runway and scale.

@nicelove666 Please reply if there are any issues, concerns, or updates while we initiate the request to the RKH.

nicelove666 commented 3 weeks ago

Yes, we succeeded and you can view the data. @Yvette516