argosopentech / argos-translate

Open-source offline translation library written in Python
https://www.argosopentech.com
MIT License

Better model distribution #7

Closed PJ-Finlay closed 3 years ago

PJ-Finlay commented 3 years ago

Currently models are distributed via Google Drive (not ideal) and a slow BitTorrent, so there's lots of room for improvement.

The plan is to make a separate repo for storing model distribution information, so let me know if you're interested.
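As an illustration of what a distribution-info repo could hold, a model index could be a small JSON file listing each model's language pair and download links. A minimal sketch in Python; the index URL and field names here are hypothetical placeholders, not the project's actual format:

```python
import json
import urllib.request

# Hypothetical index: a JSON list of entries, each with language codes
# and one or more download links for the model archive.
INDEX_URL = "https://example.com/argos-index.json"  # placeholder URL

def fetch_index(url=INDEX_URL):
    """Download and parse the model index."""
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

def find_model(index, from_code, to_code):
    """Return the first entry translating from_code -> to_code, or None."""
    for entry in index:
        if entry.get("from_code") == from_code and entry.get("to_code") == to_code:
            return entry
    return None
```

A GUI or CLI could then iterate over the entry's links, falling back to a mirror (or a torrent) when the first host is unavailable.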

PJ-Finlay commented 3 years ago

Related to this once better distribution is in place we'll want to add the ability to manage model downloads from the GUI.

pierotofy commented 3 years ago

GitHub has (so far, fingers crossed) been pretty good at hosting binary data (up to ~2 GB per file) without limits, although the party might end sometime in the future.

Otherwise a central server with a CDN works pretty well too. For cheap, there's https://www.digitalocean.com/products/spaces/ (although be careful about the outbound transfer fees!) or https://wasabi.com/ (no egress fees).

PJ-Finlay commented 3 years ago

A CDN is probably the best long-term solution (in addition to providing torrents), and Google Drive is definitely not ideal. Last weekend, when there was a lot of traffic from Hacker News, it seems some people weren't able to download models. I'm tempted to link directly to the githubusercontent URL in the LibreTranslate Models repo, but that may increase the chance that we can't continue to use GitHub.

https://stackoverflow.com/questions/38768454/repository-size-limits-for-github-com

pierotofy commented 3 years ago

I'm not too worried; the discussion on S.O. is pre-Microsoft acquisition. If you want to link it, go ahead.

davidak commented 3 years ago

There is also IPFS.

PJ-Finlay commented 3 years ago

@davidak https://www.reddit.com/r/ipfs/comments/l43u9u/seeking_open_source_contribution_to_distribute/

Starcommander commented 3 years ago

I wonder if the raw data is also stored anywhere. Or do we only have the compiled model files?

PJ-Finlay commented 3 years ago

No, the training data is very large. I don't even store it myself; I just re-download it from Opus every time I train a new model. If you're interested in the data, go to Opus. There's also a generate_wiktionary_data.py script in the training scripts.

PJ-Finlay commented 3 years ago

We're now sharing one model pair (English-Chinese) via IPFS (available on the package index) as a test. A volunteer is currently pinning it for us, and we can expand if people find this useful.
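For anyone curious how a client could fetch an IPFS-hosted model: the file is addressed by its content ID (CID), so any public HTTP gateway can serve it. A minimal sketch with a checksum check; the gateway choice is an assumption and the CID is a placeholder, not the actual English-Chinese model's:

```python
import hashlib
import urllib.request

GATEWAY = "https://ipfs.io/ipfs/"  # any public IPFS gateway works
MODEL_CID = "Qm..."                # placeholder CID, not a real model

def download_model(cid, dest_path, gateway=GATEWAY):
    """Stream a model archive from an IPFS gateway; return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with urllib.request.urlopen(gateway + cid) as response, open(dest_path, "wb") as out:
        while chunk := response.read(1 << 16):
            digest.update(chunk)
            out.write(chunk)
    return digest.hexdigest()
```

Verifying the digest against a checksum published in the package index would guard against a misbehaving gateway serving the wrong bytes.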

user813 commented 3 years ago

It might be useful to consider using GitHub Releases to distribute the models: there are no bandwidth limitations, and CloudFront is used to deliver the assets.
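On the client side, release assets are easy to enumerate via the GitHub REST API, which lists each asset's browser_download_url. A minimal sketch (the repo name in the usage comment is just a placeholder for wherever models might be published):

```python
import json
import urllib.request

def asset_urls(release):
    """Extract asset download URLs from a parsed GitHub release object."""
    return [asset["browser_download_url"] for asset in release.get("assets", [])]

def latest_release_assets(owner, repo):
    """Fetch a repo's latest GitHub release and return its asset URLs."""
    url = f"https://api.github.com/repos/{owner}/{repo}/releases/latest"
    req = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})
    with urllib.request.urlopen(req) as response:
        return asset_urls(json.loads(response.read().decode("utf-8")))

# Usage (placeholder repo):
# for url in latest_release_assets("argosopentech", "argos-translate"):
#     print(url)
```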