Closed datalove2 closed 1 year ago
Thanks for your request! Everything looks good. :ok_hand:
A Governance Team member will review the information provided and contact you soon.
Based on the discussion at this link and the decisions from T&T calls a few months ago, we do not permit merged datacap requests.
@datalove2, you're welcome to request these datasets individually. However, given that these datasets have been stored on the Filecoin network several times already, it might be challenging to garner support.
We encourage you to contribute meaningfully to the Filecoin network instead of solely seeking to acquire datacap.
I am not satisfied with the explanation given in the last LDN by this client here:
https://github.com/filecoin-project/filecoin-plus-large-datasets/issues/2050#issuecomment-1689352475
Although this LDN is closed, the applicant blames his SP for the wrongdoing. This is a fairytale.
We all know that the data preparer sends out the deal. A deal's unique ID can only match the original content supplied by the data preparer; that is the whole point of Filecoin: immutability.
I propose closing this application, and any new ones, until a proper explanation is given. Fraud harms the community, and this behaviour must be stopped.
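The immutability argument above rests on content addressing: a deal's identifier is derived from the data itself, so it cannot match anything but the exact bytes the preparer sealed. A minimal sketch of that property (using a plain SHA-256 digest as a toy stand-in; Filecoin's actual piece CID is a CommP computed over a padded Merkle tree, but the property illustrated is the same):

```python
import hashlib

def content_id(data: bytes) -> str:
    # Toy stand-in for a piece CID: a deterministic digest of the content.
    # Same bytes always yield the same ID; any change yields a different ID.
    return hashlib.sha256(data).hexdigest()

original = b"prepared dataset bytes"
tampered = b"prepared dataset bytes!"

assert content_id(original) == content_id(original)  # deterministic
assert content_id(original) != content_id(tampered)  # tampering changes the ID
```

This is why a deal ID observed on chain can be traced back only to the content the data preparer actually packed, not to some substitute an SP might claim was used.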
Any update here?
Any update here?
@Sunnyiscoming RG mentioned that merged datasets can no longer be submitted as a project. However, the AI project is a distinct category and does not fall under the dataset-merging scope. Additionally, we closed the original application and re-applied based on comments from the DCENT team during the Notary Node meeting. Despite our responses, they continue to press the same queries. The data we store comes from publicly available datasets, and storing it is not meaningless for the network. Furthermore, we have resubmitted the certification form for your review.
Please list the information of the SPs here.
@Sunnyiscoming
Reminder.
https://github.com/filecoin-project/filecoin-plus-large-datasets/issues/2169#issuecomment-1705020103 Please explain why the AI project constitutes a distinct category that does not fall under the dataset-merging scope. You may need to provide a more convincing reason regarding the previous problem.
Any update here?
It will be closed in 3 days if there is no reply here.
@Sunnyiscoming I apologize for the delayed response.
The community discourages applications for "merged datasets" because combining a large number of unrelated datasets into a single large file results in messy, difficult-to-use uploads that offer limited value. However, what we are applying for is data in a single category, "natural language processing", which all falls under the same category and serves the same purpose.
I cannot support your application. There is a clear rule in the community. Perhaps you can apply for another open dataset.
This application has not seen any responses in the last 10 days. This issue will be marked with the Stale label and will be closed in 4 days. Comment if you want to keep this application open.
-- Commented by Stale Bot.
Data Owner Name
RongYin
What is your role related to the dataset
Data Preparer
Data Owner Country/Region
China
Data Owner Industry
IT & Technology Services
Website
https://www.qcc.com/firm/3380acbb3101bd58394d1ba4be51e877.html
Social Media
Total amount of DataCap being requested
7PiB
Expected size of single dataset (one copy)
1.5PiB
Number of replicas to store
10
Weekly allocation of DataCap requested
1PiB
On-chain address for first allocation
f15waefbgzgzjq2wlb3cqcttgfsmkldfrctiwf2jq
Data Type of Application
Public, Open Dataset (Research/Non-Profit)
Custom multisig
Identifier
No response
Share a brief history of your project and organization
Is this project associated with other projects/ecosystem stakeholders?
No
If answered yes, what are the other projects/ecosystem stakeholders
No response
Describe the data being stored onto Filecoin
Where was the data currently stored in this dataset sourced from
AWS Cloud
If you answered "Other" in the previous question, enter the details here
No response
If you are a data preparer. What is your location (Country/Region)
Hong Kong
If you are a data preparer, how will the data be prepared? Please include tooling used and technical details?
If you are not preparing the data, who will prepare the data? (Provide name and business)
No response
Has this dataset been stored on the Filecoin network before? If so, please explain and make the case why you would like to store this dataset again to the network. Provide details on preparation and/or SP distribution.
Please share a sample of the data
Confirm that this is a public dataset that can be retrieved by anyone on the Network
If you chose not to confirm, what was the reason
No response
What is the expected retrieval frequency for this data
Yearly
For how long do you plan to keep this dataset stored on Filecoin
1.5 to 2 years
In which geographies do you plan on making storage deals
Greater China, Asia other than Greater China, North America, Europe
How will you be distributing your data to storage providers
Cloud storage (i.e. S3), HTTP or FTP server, IPFS, Shipping hard drives
How do you plan to choose storage providers
Slack, Big Data Exchange, Partners
If you answered "Others" in the previous question, what is the tool or platform you plan to use
No response
If you already have a list of storage providers to work with, fill out their names and provider IDs below
How do you plan to make deals to your storage providers
Boost client, Lotus client
If you answered "Others/custom tool" in the previous question, enter the details here
No response
Can you confirm that you will follow the Fil+ guideline
Yes