Closed: @kikakkz closed this issue 3 months ago.
@kikakkz can you please complete a KYC check of your GitHub ID at this link? https://filplus.storage/
Log in with your GitHub ID and click the link to complete KYC.
Application is waiting for governance review
Total DataCap requested
1PiB
Expected weekly DataCap usage rate
50TiB
Client address
f1pvixpahnuxfjp73ojo43oax5asoopobyqu4hmea
Address
f1pvixpahnuxfjp73ojo43oax5asoopobyqu4hmea
DataCap Allocation Requested
50TiB
Id
a40d9d68-71fd-4f4d-b05b-f61a68436893
Application is ready to sign
@kikakkz please complete this KYB form with details about the applicant and client: https://form.jotform.com/240786057753667
> @kikakkz can you please complete a KYC check of your GitHub ID at this link? https://filplus.storage/ Log in with your GitHub ID and click the link to complete KYC.

I confirm KYC was completed https://github.com/filecoin-project/filecoin-plus-large-datasets/issues/2328#issuecomment-2008408733
As for the website, please wait 😄. We're not formally online yet; the testnet is temporarily offline.

> As for the website, please wait 😄. We're not formally online yet; the testnet is temporarily offline.

https://testnet.web3eye.io is online again :)
I confirm KYB was submitted and deemed legit
Your Datacap Allocation Request has been approved by the Notary
Message sent to Filecoin Network
bafy2bzaceawvoc3qhyzf7bgczh66xoflu5bd4ifk6vronokgw24b2kh45onke
Address
f1pvixpahnuxfjp73ojo43oax5asoopobyqu4hmea
Datacap Allocated
50TiB
Signer Address
f1v24knjbqv5p6qrmfjj5xmlaoddzqnon2oxkzkyq
Id
a40d9d68-71fd-4f4d-b05b-f61a68436893
You can check the status of the message here: https://filfox.info/en/message/bafy2bzaceawvoc3qhyzf7bgczh66xoflu5bd4ifk6vronokgw24b2kh45onke
Application is Granted
Application is in Refill
Address
f1pvixpahnuxfjp73ojo43oax5asoopobyqu4hmea
DataCap Allocation Requested
50TiB
Id
08507fa9-6633-4a19-9231-783a0d4584f5
Application is ready to sign
@kikakkz as part of the required data sample review, please provide access to the FTP server for interactive exploration of the data. We cannot currently confirm what data you plan to store.
None of the provided SPs are currently providing indexes of their data to the network, or making data stored with them available for retrieval.
I do not have confidence that this data will be made available for retrieval, and I cannot recommend providing any subsequent DataCap until we can confirm that data stored in deals made with the initial DataCap is indeed available for retrieval.
> @kikakkz as part of the required data sample review, please provide access to the FTP server for interactive exploration of the data. We cannot currently confirm what data you plan to store.

We don't currently have an FTP server for data samples, because the data is delivered offline. We can prepare FTP access to our data.

> None of the provided SPs are currently providing indexes of their data to the network, or making data stored with them available for retrieval. I do not have confidence that this data will be made available for retrieval, and I cannot recommend providing any subsequent DataCap until we can confirm that data stored in deals made with the initial DataCap is indeed available for retrieval.

Thanks for the reply. We'll communicate with all SPs to make the data available for retrieval before creating deals.
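As a sketch of how such a retrieval claim could be spot-checked, assuming the Lassie retrieval client (https://github.com/filecoin-project/lassie) is installed; `PAYLOAD_CID` below is a placeholder, not a CID from this application:

```sh
# Attempt a trustless retrieval of a deal's payload from whichever provider
# advertises it; the fetch fails if no SP actually serves the content.
PAYLOAD_CID="<payload CID of a stored deal>"   # placeholder, not a real CID
lassie fetch -o sample.car "$PAYLOAD_CID"
```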
Hey @kevzak, sorry for the late information. We managed to deploy a public frontend for our original data directory 😄. You can access it through http://data.testnet.web3eye.io:21213/buckets/car/browse (sorry, we haven't set up HTTPS yet; we'll do that later, and then you'll be able to use http://data.testnet.web3eye.io/buckets/car/browse without the port). I'll email the access username and password to you privately.
As you can see in the following screenshot, we store the target CAR files in the car bucket and the original tarballs in the tar bucket. The original token images, which are parsed from the blockchain, are in the token-image bucket.
Currently we only index data from the Ethereum and Solana networks. In the future we'll support more blockchains.
OK, thanks for sharing @kikakkz - we will review after you use the initial 50 TiB of DataCap to see the deals and data stored.
Your data preparation pipeline, to the extent that you have described it, will not be effective in making this content available.
From the shared files, the process is that you take the underlying resources of this dataset, run `tar`/`gzip` over them, and then wrap the resulting .tgz file in a CAR wrapper for storage. Since you are compressing the individual assets, they will not be parsed or made available to the network, and it is unclear how storing deals in this format will lead to the content being preserved, discoverable, or available to IPFS or Filecoin users.
The tgz step provides minimal compression of the image data (in the example provided, the source assets fit in the same deal size when converted directly into a CAR file as when the tgz compression step is applied first).
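To illustrate the difference, a minimal sketch using the kubo (IPFS) CLI; the `assets/` directory is a stand-in for the source image files, not a path from this application:

```sh
# Pipeline as described above: the assets become one opaque blob.
tar -czf assets.tgz assets/            # individual files lose addressability
ipfs add --cid-version 1 assets.tgz    # one CID for the whole archive

# Alternative: import the directory as a UnixFS DAG and export it as a CAR.
# Each file keeps its own CID and remains individually retrievable.
ROOT=$(ipfs add -r -Q --cid-version 1 assets/)   # -Q prints only the root CID
ipfs dag export "$ROOT" > assets.car
```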
This file is for our application, https://testnet.web3eye.io. It's backup storage for our cross-chain content indexer: when users access our application and the original data is missing, they get the data from Filecoin storage. You describe the steps correctly 😄; the files are generated by our cross-chain analysis engine.
For example, if the original data is missing, we mark it in our application, and users can then get the backup file from Filecoin storage (paid or free). Our application will let users know that this is not the original data. Currently we only support images; in the future we'll support other content: videos, audio, articles. So sure, it's not for IPFS or Filecoin users generally, it's for our application's users 😄
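A minimal sketch of that fallback, assuming retrieval through a public IPFS gateway; `ORIGINAL_URL` and `BACKUP_CID` are placeholders, not values from this application:

```sh
# Try the original source first; if it is gone, serve the backup copy from
# Filecoin/IPFS storage and record that this is not the original data.
if ! curl -fsSL "$ORIGINAL_URL" -o token-image.png; then
  curl -fsSL "https://ipfs.io/ipfs/$BACKUP_CID" -o token-image.png
  echo "served from backup copy, not the original source"
fi
```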
checker:manualTrigger
No active deals found for this client.
[^1]: To manually trigger this report, add a comment with text checker:manualTrigger
Closing as inactive for 3 months.
Version
1
DataCap Applicant
kikakkz
Project ID
1
Data Owner Name
web3eye.io
Data Owner Country/Region
Hong Kong
Data Owner Industry
Web3 / Crypto
Website
https://testnet.web3eye.io
Social Media Handle
@web3_eye
Social Media Type
Slack
What is your role related to the dataset
Dataset Owner
Total amount of DataCap being requested
1
Unit for total amount of DataCap being requested
PiB
Expected size of single dataset (one copy)
512
Unit for expected size of single dataset
TiB
Number of replicas to store
4
Weekly allocation of DataCap requested
50
Unit for weekly allocation of DataCap requested
TiB
On-chain address for first allocation
f1pvixpahnuxfjp73ojo43oax5asoopobyqu4hmea
Data Type of Application
Public, Open Commercial/Enterprise
Custom multisig
Identifier
No response
Share a brief history of your project and organization
Is this project associated with other projects/ecosystem stakeholders?
No
If answered yes, what are the other projects/ecosystem stakeholders
No response
Describe the data being stored onto Filecoin
Where was the data currently stored in this dataset sourced from
My Own Storage Infra
If you answered "Other" in the previous question, enter the details here
No response
If you are a data preparer, what is your location (Country/Region)?
Hong Kong
If you are a data preparer, how will the data be prepared? Please include tooling used and technical details.
If you are not preparing the data, who will prepare the data? (Provide name and business)
No response
Has this dataset been stored on the Filecoin network before? If so, please explain and make the case why you would like to store this dataset again to the network. Provide details on preparation and/or SP distribution.
Please share a sample of the data
Confirm that this is a public dataset that can be retrieved by anyone on the Network
If you chose not to confirm, what was the reason
No response
What is the expected retrieval frequency for this data
Daily
For how long do you plan to keep this dataset stored on Filecoin
More than 3 years
In which geographies do you plan on making storage deals
Asia other than Greater China
How will you be distributing your data to storage providers
HTTP or FTP server, Shipping hard drives
How did you find your storage providers
Partners
If you answered "Others" in the previous question, what is the tool or platform you used
No response
Please list the provider IDs and location of the storage providers you will be working with.
How do you plan to make deals to your storage providers
Boost client
If you answered "Others/custom tool" in the previous question, enter the details here
No response
Can you confirm that you will follow the Fil+ guideline
Yes