anacrolix / torrent

Full-featured BitTorrent client package and utilities
Mozilla Public License 2.0

More than 3 times slower download speed with sqlite storage #890

Open mahbubmaruf178 opened 5 months ago

mahbubmaruf178 commented 5 months ago

    storage, err := sqliteStorage.NewDirectStorage(sqliteStorage.NewDirectStorageOpts{})
    if err != nil {
        panic(err)
    }
    defer storage.Close()
    config := torrent.NewDefaultClientConfig()
    config.DefaultStorage = storage

This is my storage setup for getting only a torrent file reader without saving data to disk (like streaming).

Here is the download speed with sqlite storage vs. the default storage:

[screenshot: download speed with sqlite storage]

[screenshot: download speed with default storage]

So, how can I speed up streaming without saving data to disk?

anacrolix commented 5 months ago

The defaults for the sqlite storage aren't optimal. You can tune those (and get 10-20x improvement), but additionally you can set sqlite storage to be entirely in memory. I'll provide the details soon.

mahbubmaruf178 commented 5 months ago

If the torrent is around 5 GB, is the whole thing stored in memory? 🤔 If data is stored to disk in ~20 MB chunks, can I get the best speed?

anacrolix commented 5 months ago

Sorry for the delay on this: Here are the defaults I use in https://www.coveapp.info/ for a squirrel.Cache, the implementation behind the direct sqlite storage:

    cacheOpts.SetAutoVacuum = g.Some("full")
    cacheOpts.SetJournalMode = "wal"
    cacheOpts.SetSynchronous = 0
    cacheOpts.Path = "squirrel2.db"
    cacheOpts.Capacity = 9 << 30
    cacheOpts.MmapSizeOk = true
    cacheOpts.MmapSize = 64 << 20
    cacheOpts.CacheSize = g.Some[int64](-32 << 20)
    cacheOpts.SetLockingMode = "normal"
    cacheOpts.JournalSizeLimit.Set(1 << 30)
    cacheOpts.MaxPageCount.Set(15 << 30 >> 12)

This essentially says: Allow concurrent reads and a single writer (with decreased transaction overhead), don't bother to flush to disk on writes (it's a cache), store all the data in a file called squirrel2.db, limit the file to 9 GiB. Memory map the first 64 MiB of data (I think). Keep 32 MiB of the database in memory at most. Allow regular transactions. Don't let the journal get over 1 GiB in size. If the file gets over 15 GiB, return an error.

Many of these settings should be the default. Take a look at squirrel.NewCacheOpts; there's plenty of stuff in there, including exclusive mode and memory mode, which will give you even better performance.
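For reference, here is a minimal sketch of wiring options like the above into a torrent client. It assumes sqliteStorage.NewDirectStorageOpts exposes the same squirrel cache option fields shown above and that g is github.com/anacrolix/generics; treat it as an illustration, not a drop-in.

    package main

    import (
        g "github.com/anacrolix/generics"
        "github.com/anacrolix/torrent"
        sqliteStorage "github.com/anacrolix/torrent/storage/sqlite"
    )

    func newTunedSqliteClient() (*torrent.Client, error) {
        var opts sqliteStorage.NewDirectStorageOpts
        opts.Path = "squirrel2.db"                // single cache file on disk
        opts.SetJournalMode = "wal"               // concurrent readers, one writer
        opts.SetSynchronous = 0                   // skip fsync on writes; it's a cache
        opts.SetAutoVacuum = g.Some("full")       // reclaim space as pieces are evicted
        opts.Capacity = 9 << 30                   // cap cached data at 9 GiB
        opts.MmapSizeOk = true
        opts.MmapSize = 64 << 20                  // memory-map the first 64 MiB
        opts.CacheSize = g.Some[int64](-32 << 20) // keep at most ~32 MiB of pages in memory
        opts.JournalSizeLimit.Set(1 << 30)        // don't let the WAL exceed 1 GiB
        opts.MaxPageCount.Set(15 << 30 >> 12)     // 15 GiB expressed in 4 KiB pages; error beyond that

        storage, err := sqliteStorage.NewDirectStorage(opts)
        if err != nil {
            return nil, err
        }
        cfg := torrent.NewDefaultClientConfig()
        cfg.DefaultStorage = storage
        return torrent.NewClient(cfg)
    }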

mahbubmaruf178 commented 5 months ago

Yes, download speeds improved, but not sufficiently. I noticed that bolt storage has fast download speed. Does boltdb have a size limit, or is it possible to set one?

anacrolix commented 5 months ago

No, the bolt DB implementation provided in anacrolix/torrent doesn't include size limits, or any cache eviction.
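For completeness, selecting the bolt-backed storage looks roughly like this. It assumes storage.NewBoltDB takes the directory to keep its database file in (check the storage package docs for the exact signature):

    package main

    import (
        "github.com/anacrolix/torrent"
        "github.com/anacrolix/torrent/storage"
    )

    func newBoltClient(dir string) (*torrent.Client, error) {
        cfg := torrent.NewDefaultClientConfig()
        // Piece data is kept in a bolt database under dir. As noted above, there is
        // no capacity limit or eviction, so the database grows with what you download.
        cfg.DefaultStorage = storage.NewBoltDB(dir)
        return torrent.NewClient(cfg)
    }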

anacrolix commented 5 months ago

Did you want to try with https://github.com/anacrolix/possum?

The Go interface is here: https://pkg.go.dev/github.com/anacrolix/possum/go.

You can use the resource.Provider interface in https://pkg.go.dev/github.com/anacrolix/possum/go@v0.0.0-20240117104152-4c6d4e2d6204/resource with storage.NewResourcePieces in https://pkg.go.dev/github.com/anacrolix/torrent/storage. It does require that you compile a Rust library.

In my testing it's not currently faster than using squirrel, but it is heading that way.

I'm not sure in general why you're not happy with the other storage backends; I've not seen them be bottlenecks before. If you have more information you could share, please do (a torrent/magnet link, for example).

mahbubmaruf178 commented 5 months ago

I'm working on a project where users can upload torrent files into their cloud storage, like OneDrive, pCloud, Storj, Wasabi, etc. I made an API that requires a file reader to upload the file. In my case I've got good speed with everything except sqlite and filecache (maybe).

anacrolix commented 5 months ago

Can you just pick one storage backend and go with that? Any reason you need the sqlite or filecache ones?

mahbubmaruf178 commented 5 months ago

Because cheap VPSes have low storage, I'm trying to upload it as a stream without saving the file to disk.
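A rough sketch of the streaming pattern discussed in this thread: read each file through the torrent's reader and hand it straight to an upload function, so the full payload never has to land on disk (a cache-style storage like the one configured above bounds how much is kept locally). The upload callback here is a stand-in for the user's cloud API, not part of the library.

    package main

    import (
        "io"

        "github.com/anacrolix/torrent"
    )

    func streamTorrent(cl *torrent.Client, magnet string, upload func(name string, r io.Reader) error) error {
        t, err := cl.AddMagnet(magnet)
        if err != nil {
            return err
        }
        <-t.GotInfo() // wait for metadata before touching files

        for _, f := range t.Files() {
            r := f.NewReader() // reading prioritizes and waits for the needed pieces
            err := upload(f.Path(), r)
            r.Close()
            if err != nil {
                return err
            }
        }
        return nil
    }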