Open mariagq777 opened 1 year ago
It is very much working, yes. Provide a test case / more info and I'll try to help.
How are you tracking the 31 MB? Sounds like you are looking at sparse file sizes, tbh.
First of all, thanks for the response. Truth be told, the whole team here is very excited about the project; we didn't hesitate to start testing and building with it. We love it.
I don’t want to clutter my comment with unnecessary code. I'm copying and pasting examples provided in the quick start of the documentation: https://docs.holepunch.to/quick-start#hyperbee-sharing-append-only-databases
The example consists of a writer file, a reader file, and a dictionary that gets ingested into the writer.
When both are started, only the writer's folder weighs 31 MB.
But just one or two dictionary queries from the reader cause the 31 MB to appear replicated in the reader's folder.
This image was taken after making only 2 queries:
Look, I just changed computers. Now an iMac (Intel), Node 18.
I just ran both files and everything looks fine: one folder is 31 MB, the other is practically empty.
So, in the reader, I ask for the word "actual".
This should cause only the data needed to answer the query to come from the writer. Now, look, I just made the query...
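For reference, the lookup itself is just this (assuming `bee` is the Hyperbee instance my reader file opens over the writer's core):

```js
// 'bee' is the Hyperbee instance the reader opens over the writer's core
const node = await bee.get('actual')
console.log(node) // -> { seq, key: 'actual', value: ... }, or null if not found
```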
Next I'm going to check how much space the folder and file occupy.
And from what I can see, the entire dictionary was passed from one place to the other.
Thanks for your time
Pablo
Yea, that's showing the sparse (apparent) size. Use du or something like that to check in the terminal (the files are mostly holes, which take no actual space).
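E.g. something like this in Node shows the difference (the path is just an example; point it at one of your data files):

```js
// Compare the apparent size with what is actually allocated on disk.
// './reader-storage/db/data' is just an example path.
const fs = require('fs')

const st = fs.statSync('./reader-storage/db/data')
console.log('apparent size:', st.size, 'bytes')             // what ls / Finder report
console.log('allocated on disk:', st.blocks * 512, 'bytes') // roughly what du reports
```

On a sparse file the second number stays small even if the first one says 31 MB.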
Only one query and...
I'm very sorry, I cat'ed the data file and could see that it is practically empty.
I wonder whether, with huge source files, you could make the destination computer believe its disk is full.
Thank you for all your help, the project is amazing.
I am replicating the example from the Hyperbee documentation. I'm referring to this one: https://docs.holepunch.to/quick-start#hyperbee-sharing-append-only-databases
The example encourages you to make a couple of queries and observe that only a small amount of data has been transferred to the reading node. It refers to "sparse" querying, where only the blocks necessary to answer the query are downloaded.
The problem I'm facing is that it takes only 1 or 2 queries for the entire dictionary (approximately 31 MB) to be transferred.
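For context, my reader boils down to roughly this (a sketch reconstructed from the quick start, not my exact file; the storage path is mine and the writer's core key comes in as a CLI argument):

```js
const Hyperswarm = require('hyperswarm')
const Corestore = require('corestore')
const Hyperbee = require('hyperbee')

const store = new Corestore('./reader-storage') // my local storage path

async function main () {
  const swarm = new Hyperswarm()
  swarm.on('connection', conn => store.replicate(conn)) // the line discussed below

  // Writer's core key, passed on the command line as hex
  const core = store.get({ key: Buffer.from(process.argv[2], 'hex') })
  const bee = new Hyperbee(core, { keyEncoding: 'utf-8', valueEncoding: 'utf-8' })

  await core.ready()
  swarm.join(core.discoveryKey)

  // Sparse by default: this lookup should only download the blocks it needs
  console.log(await bee.get('actual'))
}

main()
```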
I then thought the problem was line 11 of the reader: swarm.on('connection', conn => store.replicate(conn))
Fearing it was forcing a full synchronization, I removed it, but I'm seeing the same behavior again.
Am I doing something wrong? Is this how it works?
Thank you and sorry for the inconvenience
Node 18, Apple M1