openstreetmap / operations

OSMF Operations Working Group issue tracking
https://operations.osmfoundation.org/

Create .torrent files as part of planet creation #451

Closed: Firefishy closed this issue 3 years ago

Firefishy commented 4 years ago

Currently we publish all planet files via HTTPS on https://planet.openstreetmap.org/. We'd like to start publishing BitTorrent .torrent files for the large files, e.g. https://planet.openstreetmap.org/planet/2020/planet-200720.osm.bz2 (92GB)

Requirements:

Goal:

mmd-osm commented 4 years ago

// cc: @cquest who's already running a torrent service on http://osm.cquest.org/torrents/ and might share some ideas re. requirements.

hbogner commented 4 years ago

The planet was also torrented a long time ago: https://github.com/mnalis/osm-torrent cc @mnalis

hbogner commented 4 years ago

@mnalis you said something about a one-liner and RSS, could you please elaborate?

mnalis commented 4 years ago

As the person who was running https://github.com/mnalis/osm-torrent, I'm willing to help in any way needed. Let me know.

As for the comments on requirements:

Sure. At its most basic, it was just a fire-and-forget apt-get install mktorrent plus a (somewhat longish, but still) one-liner in the script that is called after the .osm.bz2 file is generated. No further maintenance was needed. Mine, for example, was basically:

# this is our full featured torrent file: redundant trackers, tcp+udp, ipv4+ipv6, webseed
# -l = piece size as a power of two (see CHUNKSIZE below), -c = comment field,
# -a = tracker announce URL(s), -w = webseed URL(s); writing to a .tmp file
# and mv-ing it into place makes the published .torrent appear atomically
mktorrent -l $CHUNKSIZE \
  -c "See http://osm-torrent.torres.voyager.hr/ -- $LICENSE" \
  -a http://ipv4.tracker.osm-torrent.torres.voyager.hr/announce \
  -a http://ipv6.tracker.osm-torrent.torres.voyager.hr/announce \
  -a udp://tracker.ipv6tracker.org:80/announce,http://tracker.ipv6tracker.org:80/announce \
  -a udp://tracker.publicbt.com:80/announce,http://tracker.publicbt.com:80/announce \
  -a udp://tracker.ccc.de:80/announce,http://tracker.ccc.de/announce \
  -a udp://tracker.openbittorrent.com:80/announce \
  -w $URL_PLANET2 -w $URL_PLANET $FILE_PLANET -o ${FILE_TORRENT}.tmp \
  && mv -f ${FILE_TORRENT}.tmp ${FILE_TORRENT} \
  && ln -sf ${FILE_TORRENT} ${FILE_TORRENT_LATEST}

In addition to that, it would be nice to have an RSS/Atom feed of the changes (I was providing RSS too, with a simple shell script), as at least a few GUI clients offer to follow/auto-share files from RSS (and I know at least a few people who would want to help share the load that way, myself included). Of course, dedicated non-GUI sharers could just put a one-liner in cron that wgets https://planet.openstreetmap.org/latest.torrent into their client's watch directory (see the sketch below), but still, more community support could be had with RSS.
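
For the cron case, a minimal sketch of such a script (the URL and the watch directory are illustrative assumptions, not official endpoints):

#!/bin/sh
# Hypothetical cron job: fetch the latest planet torrent into a torrent
# client's watch directory. URL and paths are assumptions for illustration.
set -eu
WATCHDIR=/incoming/rtorrent   # directory the torrent client watches
URL=https://planet.openstreetmap.org/latest.torrent
wget -q "$URL" -O "$WATCHDIR/planet-latest.torrent.tmp" \
  && mv "$WATCHDIR/planet-latest.torrent.tmp" "$WATCHDIR/planet-latest.torrent"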

Yes, the webseed is working in the example above, via the -w options.

Yes, it's configurable. I was using at the time:

CHUNKSIZE=22        # 2^20=1MB, 2^22=4MB, etc.; mktorrent 1.0's default (2^18=256kB) is too small for our ~15GB files

but one could choose any value they like.
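
For reference, mktorrent's -l value is the exponent of two for the piece size, so the resulting piece count is easy to check; a quick sketch (the file size is illustrative):

# Piece-size arithmetic for mktorrent's -l option (power-of-two exponent).
CHUNKSIZE=22
PIECE_BYTES=$((1 << CHUNKSIZE))            # 2^22 = 4194304 bytes = 4 MiB
FILE_BYTES=$((55 * 1024 * 1024 * 1024))    # e.g. a hypothetical ~55 GiB dump
echo "pieces: $(((FILE_BYTES + PIECE_BYTES - 1) / PIECE_BYTES))"   # -> 14080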

Any public tracker is by definition susceptible to being used by individuals/groups/causes one does not approve of. Listed above are a few examples which were fine by me back then, but that has probably changed.

The only way to be absolutely sure is to provide your own. Note that most torrent software does not actually require a tracker at all (using DHT / PEX / LPD instead of, or in addition to, trackers to find peers), so you could probably go trackerless (at least at the start), but having a tracker or two in there is probably a better idea.

At the time I was providing ipv4/ipv6.tracker.osm-torrent.torres.voyager.hr, which ran the simple GPL PeerTracker PHP software with SQLite on standard shared Apache hosting, using suexec and forking for each script invocation. I chose it for simplicity (and for its IPv6 support, which was lacking elsewhere at the time, and another subject I was interested in). It was reputable and worked fine then, but if I had high load in mind, I'd probably have used a dedicated tracker running as a daemon.

Sure. It worked just fine with at least uTorrent on Windows, and the mainline client / rtorrent / Transmission / Vuze on Linux.

I'm glad the BitTorrent idea is finally getting traction. Let's get on with it!

hbogner commented 4 years ago

I had no idea it was that simple! I just used your one-liner to create a torrent for some ISO files I'm mirroring, and it started webseed downloads from all the mirrors, with no seeders to start with. Now I need to build a planet seed machine :)

cquest commented 4 years ago

The only problem so far is that, in order to generate the torrent file, you have to download the planet file entirely. That's why generating an official torrent file on planet.openstreetmap.org will save time and bandwidth for everybody.

As soon as this torrent is available, clients can start downloading parts from the web seed at planet.osm.org and share them with other torrent clients.

grischard commented 4 years ago

@cquest's scripts are available at http://osm.cquest.org/torrents/.pbf/. They handle web seeds and rss. Thank you Christian!

mmd-osm commented 4 years ago

What would be the absolute minimum version of this? Can we skip rss.xml, meta4, .m4j and running our own tracker, and leave that for later? What about history pbf and osm.bz2? Are they in high demand or can we skip them for the time being?

Do we care about osm.pbf files that have been deleted after a while? Do we want to remove the .torrent file as well in that case?

mnalis commented 4 years ago

@mmd-osm

verdy-p commented 4 years ago

@cquest: your initial list of trackers is not correct, and it is a bit too small to be resilient. Here is the list I use:

udp://tracker.coppersurfer.tk:6969/announce
udp://tracker.opentrackr.org:1337/announce
udp://tracker.torrent.eu.org:451/announce
udp://tracker.leechers-paradise.org:6969/announce
udp://tracker-udp.gbitt.info:80/announce
http://tracker.gbitt.info/announce
http://tracker.cquest.org:6969/announce
http://tracker.computel.fr:80/announce

The last two do not work most of the time (including your own local tracker). You also used a tracker with a non-routable domain name (in ".local"), which should never be there; it is removed from the list above. The UDP trackers in the list are among the most reliable for now, and they are generally faster than HTTP (or, worse, HTTPS; though HTTPS is great for protecting the network and getting a secure restart). Maybe we should have an HTTPS tracker on OSM.org itself, directly on its planet server, even with small capacity, as long as it also serves UDP and announces itself on a few other "open" trackers. Along with a webseed pointing to the existing HTTP(S) server on OSM.org, this would allow faster propagation: for now you still need to perform a full download once a week on your server before you can announce it on your small tracker, which is not available for long enough. You really need other external trackers capable of supporting more clients, even though most torrent clients can now also use PEX and the DHT to discover many more peers once they've downloaded a few kilobytes.

Note that downloading the first ~128MB of data per file is always fast. But at ~127.5MB (128MB including the filesystem overhead for allocating free space on the volume and writing/updating the directory entries), most downloads stall for about half an hour or more before resuming at much lower speed. During that initial period you cannot even shut down the PC on Windows (this is a problem inside libtorrent, which is used by many BitTorrent clients and affects both Windows and Linux) because of the way it manages file storage allocation within the OS's limits on pending disk I/O. This ~half-hour delay may be reduced if you download planet files onto an SSD, and may be much longer if you download to a network-mounted drive (NAS) or a RAID with parity or redundancy. This is not a problem of CPU performance or a memory barrier: the first 128MB load in just one second, then nothing happens for a long time while you see huge disk activity inside the OS; after a few minutes the download restarts and resumes at decent speed (generally around 700 megabits/second if a dozen peers are available), still limited by the disk write bandwidth on a gigabit fiber connection, or by the Internet download bandwidth on DSL.
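
If the stall is indeed caused by up-front file allocation, one possible mitigation (an assumption, not a tested fix) is to preallocate the target file before the client opens it, or to enable the client's own preallocation option; a sketch for a Linux host with an extent-based filesystem such as ext4 or XFS:

# Hypothetical mitigation: reserve the full extent up front so the client
# does not block on space allocation around the ~128MB mark. fallocate
# reserves space without writing zeroes; path and size are illustrative.
fallocate -l 53G /data/planet-200907.osm.pbf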

I could then reshare some planet files with a good ratio: for some planets shared since last January my ratio reaches about 1.5, while for the most recent planet file (reshared quickly) the ratio grows rapidly to about 0.25 in just one day, with about 10-20 peers connected constantly.

So even with a modest number of peers on torrents, we save terabytes on OSM.org's planet server! This could be much more if OSM.org implemented the tracker directly on its server to greatly accelerate distribution. (The same approach could also be used to publish a small RSYNC datafile and redistribute the minute diffs listed in it as torrents instead of over RSYNC; classic mirrors are not fast enough to significantly reduce the workload on OSM.org's servers.)

verdy-p commented 4 years ago

I can also provide some statistics about the current growth of the planet file (in .pbf format): it grows at a significant but almost constant rate each week:

I can predict that today's planet file (in .osm.pbf format) for 2020-09-07 will be ~52.89 GiB, given that it grows by about 0.11 GB each week. There was no visible effect on this growth from the COVID-19 pandemic, even though more people were at home (not only to work on OSM): with fewer people going outdoors, many worked on "cleanup tasks" or just refined the geometry of existing objects around them to match the most recent imagery; bots are still active removing unnecessary duplicates, and new data imports from public agencies are arriving at slower rates, as many of them have been delayed.


The prediction was correct and is still valid today.
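
As a worked example of that extrapolation (baseline figures taken from the comment above; the horizon is illustrative):

# Linear extrapolation of the planet-file size, ~0.11 GiB growth per week,
# from the 52.89 GiB baseline predicted above for 2020-09-07.
awk -v base=52.89 -v rate=0.11 -v weeks=4 \
  'BEGIN { printf "predicted size in %d weeks: %.2f GiB\n", weeks, base + rate * weeks }'
# -> predicted size in 4 weeks: 53.33 GiB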

verdy-p commented 4 years ago

Also note that sharing on the same host not only a large OSM planet dump but also a popular Linux distribution ISO significantly helps keep the DHT connected (the DHT is shared across multiple torrents).

For example, I host the ISOs for Linux Mint 20 (64-bit, in three editions by desktop environment: Xfce, MATE, and the most popular, Cinnamon). These ISOs are not very large (about 1.8 GiB each), they constantly have a large number of connected peers (> 1000), and they are updated with one or two releases each year. Even Linux Mint 19 is still popular (version 20 no longer has a 32-bit build, so the 32-bit version of 19 is still widely requested).

This allows faster bootstrap of the DHT and improves stability, for faster discovery of clients interested in OSM data across various network types (IPv4 or IPv6; mobile, fiber, DSL, and dedicated servers on backbones/datacenters) and lots of countries worldwide.

hbogner commented 4 years ago

Just a thought: if you create planet.torrent on ironbelly after planet.pbf is created, could all the mirrors get it by torrent too, instead of waiting for an rsync of the whole file?
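
As a sketch of what a mirror-side fetch could look like, assuming the .torrent were published on planet.openstreetmap.org and aria2 were installed (both assumptions; paths are illustrative):

# Hypothetical mirror update: fetch the planet through the swarm instead of
# rsync; --seed-time=0 stops seeding immediately after completion (tune it
# upward if the mirror should keep helping the swarm).
aria2c --dir=/srv/mirror/pbf --seed-time=0 \
  https://planet.openstreetmap.org/pbf/planet-200907.osm.pbf.torrent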

verdy-p commented 4 years ago

I do agree that syncing mirrors with torrents would potentially allow a faster start of the swarm and better stability, without hitting the origin server with the excessive load that mirrors (and all individual downloaders) would otherwise generate for hours or days.

For now, getting a new weekly dump, or even a daily one, is just a "dream". And this causes significant problems, such as the lack of map renderers (remember that Wikimedia has already announced it will severely limit access to its own tile server).

We need a solution to better scale the distribution and to largely grow our own "CDN", which is very fragile and very slow to update to reflect changes (or corrections after small errors, like when most of England recently turned into a giant beach because of a tiny change to a small beach, or because of incomplete changesets that were interrupted prematurely for various technical reasons).

How can we offer stability to the network? We need to demand LESS from mirrors and servers, and allow more people to contribute a share of their computing or networking power. Torrents are a good solution towards this goal (not the only one, of course). And if the Foundation's servers are less stressed, that capacity can be put to better use for more powerful tools, instead of doing the same things massively and repeatedly (and struggling to synchronize with each other in reasonable time).

That's also how OSM can keep playing in the same garden as commercial solutions (which have developed much more massive CDNs, allowing their content to be updated and delivered faster and more reliably).

I see rsync as useful only for internal synchronization of servers managed under a unified administration, using the same tools and with strong interaction between their local administrators; access to rsync could then be restricted to just a few internal mirrors. But I am convinced that a P2P protocol (torrent- or Kademlia-based) would scale better than the current solution based on a single central administration: we could use a more heterogeneous array of hosts, more easily deployable worldwide, and we should be able to use the growing network capacity of individual fiber internet connections, without necessarily having to develop our own set of unstable, poorly administered, unbalanced peerings.

If you want even minimal load balancing, torrents fit quite well, and they require almost no development. The protocol is already there; we just lack a few maintenance scripts on the OSM servers to start seeding the dumps in a more cooperative way (and without requiring a strong, fast commitment from each participant).

Implementing the root tracker directly on the OSM servers is not complicated (after all, Christian Quest did that for his own HTTP downloads on OSM France's servers, and open-sourced the solution). But he also noted that it is difficult to get the first whole download.

We also need a few more open trackers than just the two listed by Christian (those two are not very stable, not even Christian's own tracker, which regularly goes offline or cannot accept many requests). Generally the webseeds located in Germany provide a needed additional link, but they have limits too: downloading a 50GB dump by torrent still takes 5 to 12 hours, too much for most users who want to use their PC for just a couple of hours, or to be able to reboot it (to install system updates or their own software) without waiting that long again.

I've seen several occasions where these torrents, started one week, never finished before the next week, when a new dump was already available; with the new version out, the number of seeders dramatically decreased instantly. If you don't watch for that, you may not be able to complete the next download after you finish the first one (and you cannot perform them in parallel: most seeders limit the total bandwidth across all the files you attempt to download in parallel).

Clearly we have an unsolved scalability problem. And the cause lies precisely in OSM's servers not using the best options that are already available today, but still unused.

mnalis commented 4 years ago

So, is there anything more we can do to help implement planet torrents, in a way that minimizes load on the sysadmin team and other overloaded OSM people? Write a pull request? What would be appreciated? I do get the irony that even asking such questions generates more load...

verdy-p commented 4 years ago

The current seed made by Christian should also include a comment field describing the file. For the latest dump (which was late), I started the seed with this info in the comment field:

File name:  planet-200907.osm.pbf
Description:    OpenStreetMap planet database dump
Date:   2020-09-11 04:25
File size:  53.05 GiB
MD5:    f3fe22ed16296f8a8f7abf24916702f6

Copyright:  OpenStreetMap contributors
Copyright URL:  https://www.openstreetmap.org/copyright

Licence:    Open Data Commons Open Database License (ODbL)
Licence URL:    https://opendatacommons.org/licenses/odbl/
Granted by: OpenStreetMap Foundation (OSMF)

OpenStreetMap® is a registered trademark of the OpenStreetMap Foundation, and is used with their permission. See: https://wiki.osmfoundation.org/wiki/Trademark_Policy

Such info is useful to keep candidate public trackers from blocking the torrents (which are otherwise identified only by their numeric signature and the initial seeder), and to keep ISPs and data agencies from blocking the torrents and all their trackers. It is also needed to respect the OSM licence: every downloader of these files should see such licensing info, just as we require it for online maps.

Note that the MD5 comes from the content of the .osm.pbf.md5 file. It is not the same hash used to identify the torrent (which is computed over SHA-1 piece hashes that depend on the fragment size: 4MB here, for 13547 fragments including the last partial one).

You could also publish multiple hashes instead of just a single ".md5" file: either a ".hash" file containing all the digests, or a small ".md" file containing the description shown above (possibly following the MIME-header, .properties, or .ini format). I suggest the ".md" format, which is common now for many open-source/open-data projects; an RDF/XML or RDF/JSON format would work as well. But the ".md" format also fits well in the public "comment" field describing torrents.

And so you'd include this kind of info for the same file (or a subset, for your chosen digest algorithms); a generation sketch follows the list:

Adler32: c2df1cf8
CRC32: c1ea5da7
CRC64: 966160999dc04901
MD5: f3fe22ed16296f8a8f7abf24916702f6
RIPEMD-128: 4359c0689b2560fa1d1151ea975547da
RIPEMD-160: 1baba953e9ae0f8343e74d25e31c9adfa8e6812d
RIPEMD-256: dfd1cedc36897f76166971b566fcccd596d416aa39b89fd48c54f5bf2f1653ce
RIPEMD-320: bccc8e1018ab7782fb7762667336e8c2e1761f968e45625ef986c7adeb046ead0fbdaa71af31950f
SHA-1: 7f867cf8d20cfff0eaf09bbe62d1f56daf636bcb
SHA-256: bdf6552241425da05bafa0074c18e9e48c95d3ffdf0473d4dcd0e90ba323887c
SHA-256 Base64: vfZVIkFCXaBbr6AHTBjp5IyV0//fBHPU3NDpC6MjiHw=
SHA-384: c5cfb3bf9af4327431d87e30e97965b49cd78e49e69ce1b2146a3363bf09ce7fcd5c751a64a34eb0f65606ba809f66e6
SHA-512: 928d0127974f3e069e4f30317ffb864130d2d8cbcdc4b61706cf953e69b51920772e599114069da866762d5c4f68ba2b42db6d78acbd096b33b1571cce53d70d
SHA3-224: 723801771f6ade8294ba966cb4f713db676e544ee0b43adbe8170aba
SHA3-256: abf63a76ca86a8e243bc31602ef3cefd9f8533a2852c2a73d314e9519ed9afc9
SHA3-384: 1a80c4db0a3225072fb54844a9a1c53caf3f6149f8c7473c84d830208fb277dd362694a08301b1cceee08ad3b3907f74
SHA3-512: 14e419d79eb98c29903446add8beb7d56737592ac54dc60afbc64602e06750fd605814a11fe8b2925a0a384a25aa1c4d1ccb938969a763366a834b6eeb6c5cfa
TTH: tjmqanvaxssp3vxofkktz34ymuuzrpocrwplx6q
Tiger: 4b523a03f1e0ec22520aefedd8de6a0d0b6ed704ea8af3c0
Whirlpool: 75eea667d2cdc84cfb6f2625e2b178e33b1b60905cf893b0918469e70e5c7d8b9d723849c4286d0b70487045e3a4a98e0cbfa568faf2da9bc517007935fce208
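
A sketch of generating such a multi-digest ".hash" file with the standard coreutils hashers (only a subset of the algorithms listed above; each hasher rereads the file here, and a single-pass variant appears further down in the thread):

# Hypothetical ".hash" generation for a planet dump using coreutils.
F=planet-200907.osm.pbf
{
  printf 'MD5: %s\n'     "$(md5sum    "$F" | cut -d' ' -f1)"
  printf 'SHA-1: %s\n'   "$(sha1sum   "$F" | cut -d' ' -f1)"
  printf 'SHA-256: %s\n' "$(sha256sum "$F" | cut -d' ' -f1)"
  printf 'SHA-512: %s\n' "$(sha512sum "$F" | cut -d' ' -f1)"
} > "$F.hash"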

Note also that while the filename says 2020-09-07, checking the data and preparing the .pbf for download on the origin web server takes about 4 days. This clearly shows that publication is not the first priority, and that dumps are used first for internal replication within the Foundation's servers before being distributed elsewhere via a separate web server (HTTPS) or rsync.

Only the publication of minute diffs follows an accelerated schedule; so when these dumps become available, they must always be brought up to date by loading at least 4 days of minute diffs, which also need to be downloaded separately.
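
For example, a downloader could catch up those missing days with pyosmium-up-to-date from the pyosmium tools (a sketch; assumes the tool is installed and the dump carries its usual embedded replication timestamp):

# Hypothetical catch-up: apply replication diffs to bring a dated planet
# dump forward to "now"; the tool reads the timestamp embedded in the file
# and fetches the missing minute diffs automatically.
pyosmium-up-to-date planet-200907.osm.pbf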

I've also included more URLs for web seeds (at least those that mirror dated files, not just the "latest" version):

https://ftp.spline.de/pub/openstreetmap/pbf/planet-200907.osm.pbf
https://ftp5.gwdg.de/pub/misc/openstreetmap/planet.openstreetmap.org/pbf/planet-200907.osm.pbf
https://ftp.fau.de/osm-planet/pbf/planet-200907.osm.pbf
https://osm.openarchive.site/planet-200907.osm.pbf
https://download.bbbike.org/osm/planet/planet-latest.osm.pbf
https://ftp.nluug.nl/maps/planet.openstreetmap.org/pbf/planet-latest.osm.pbf
https://free.nchc.org.tw/osm.planet/pbf/planet-200907.osm.pbf
https://ftpmirror.your.org/pub/openstreetmap/pbf/planet-200907.osm.pbf
https://planet.openstreetmap.org/pbf/planet-200907.osm.pbf
https://planet.passportcontrol.net/pbf/planet-200907.osm.pbf

verdy-p commented 4 years ago

I also think the OSM dumps should publish not just an MD5 hash but a stronger secondary hash (SHA-1 is better, and better still in combination with MD5). There are good tools that compute multiple hashes in parallel; this is not CPU-intensive, as it is mostly I/O-bound for files this large, which exhaust the memory cache and are probably stored on a networked RAID filesystem on the OSM data file servers. We know these servers are overloaded mostly because of the limited throughput of their large RAID storage, mounted as network folders for distribution by the web server: it probably cannot sustain more than 100-150 MiB per second, even if the network connection is 1Gbps Ethernet or 10Gbps fiber.
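
One way to compute several digests in a single pass over the file, as suggested, is tee with process substitution (a bash sketch; file and output names are illustrative):

# Hypothetical single-pass hashing: read the planet file once and feed all
# hashers in parallel, so the slow RAID volume is only traversed one time.
F=planet-200907.osm.pbf
tee < "$F" \
  >(md5sum  > "$F.md5") \
  >(sha1sum > "$F.sha1") \
  | sha256sum > "$F.sha256"
# Each output file contains "<digest>  -" because the hashers read stdin.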

ianthetechie commented 3 years ago

I would love this. I saw the tweet go out about this the other day (https://twitter.com/OSM_Tech/status/1329182062516051970) and have been testing it out. Download speeds are fantastic, and I've set up one of our servers in Germany to seed the latest planet PBF. Once https://github.com/openstreetmap/chef/pull/359 is merged and there's an RSS feed for the PBF files, I think we can formalize this. This will be a huge win in distributing the planet dumps!

Firefishy commented 3 years ago

This has now been implemented. Torrent files are being created for the old exports. Thanks to @mnalis https://github.com/openstreetmap/chef/pull/359 and https://github.com/openstreetmap/chef/pull/360

mnalis commented 3 years ago

Thanks for pulling this, @Firefishy - I'm glad I could help!

Two things, if you can spare a minute:

Zverik commented 3 years ago

I'd like to remind you of the full-history dumps. They are three times bigger than the regular pbf files, and they take a long time to download. Is it possible to extend this torrent service to them? (Should I open a new ticket for this?)

mnalis commented 3 years ago

@Zverik it should already be implemented when the first run goes live (probably 201123, in two days or so). From https://github.com/openstreetmap/chef/pull/360:

# Create *.torrent files
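# (the arguments appear to be: file prefix, extension, and target directory;
#  inferred from the calls below and the chef PR, not documented here)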
mk_torrent "changesets" "bz2" "planet/${year}"
mk_torrent "discussions" "bz2" "planet/${year}"
mk_torrent "planet" "bz2" "planet/${year}"
mk_torrent "history" "bz2" "planet/full-history/${year}"
mk_torrent "planet" "pbf" "pbf"
mk_torrent "history" "pbf" "pbf/full-history" 

zorun commented 3 years ago

Thanks for implementing this!

It would be good to have a "latest" symlink named planet-latest.osm.pbf.torrent, just like the non-torrent files.

It allows for a quick & dirty alternative to the RSS feed: a bittorrent client could simply fetch planet-latest.osm.pbf.torrent at regular intervals. Bittorrent clients typically recognize when a newly added torrent file is the same as an existing one (at least, Transmission does).

mtmail commented 3 years ago

Isn't there a risk that mirrors point -latest to different dates? The mirror I run is (of course) hours behind, and points to the previous week until the new file is fully downloaded.

HolgerJeromin commented 3 years ago

A customer only fetches planet-latest.osm.pbf.torrent. That torrent references a file with a specific date in its name, so IMO this is safe: if your mirror is not up to date, it will simply be skipped by the torrent app.

mnalis commented 3 years ago

> It would be good to have a "latest" symlink named planet-latest.osm.pbf.torrent, just like the non-torrent files.
>
> It allows for a quick & dirty alternative to the RSS feed: a bittorrent client could simply fetch planet-latest.osm.pbf.torrent at regular intervals. Bittorrent clients typically recognize when a newly added torrent file is the same as an existing one (at least, Transmission does).

Do you know of bittorrent clients which can automatically prefetch the planet-latest.osm.pbf.torrent file at regular intervals and add it if changed? I don't (in particular, Transmission, which you mention, does not seem to support that). If you're on a UN*X-like (i.e. non-Microsoft-Windows) system, it is pretty easy (given some shell experience) to put a shell script in a cron job which retrieves the latest torrent file (I can share mine, using wget, if people want; a sketch follows below), but for regular MS-Windows users RSS is probably the only reasonable way to automatically help the swarm when a new planet comes out.
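
A minimal sketch of such a cron script (an illustration, not my actual script; URL and paths are assumptions):

#!/bin/sh
# Hypothetical cron job: fetch the latest torrent and copy it into the
# client's watch directory only when its content has actually changed.
set -eu
URL=https://planet.openstreetmap.org/pbf/planet-latest.osm.pbf.torrent
NEW=/tmp/planet-latest.osm.pbf.torrent
WATCH=/incoming/rtorrent/planet-latest.osm.pbf.torrent
wget -q "$URL" -O "$NEW"
cmp -s "$NEW" "$WATCH" 2>/dev/null || cp "$NEW" "$WATCH"   # add only if new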

And since adding an RSS feed (which many torrent clients support out of the box) is not much harder than creating a symlink (and was already implemented in https://github.com/openstreetmap/chef/pull/359, if it is decided to add it), I'd really prefer that RSS feeds be added instead. All it really needs is vetting from OSM operations that it is good... If something is lacking, I'm more than willing to address any concerns and fix the issue, but someone needs to tell me what is needed.

Firefishy commented 3 years ago

"latest" file symlink implemented in https://github.com/openstreetmap/chef/commit/f84ac307449ea2d9ef5ba89eb71c53ee9c9fcfc8

mnalis commented 3 years ago

@Firefishy planet-latest redirection does not seem to work correctly for .torrent files (it does for other files).

Trying to get https://planet.openstreetmap.org/planet/planet-latest.osm.bz2.torrent still redirects to the previous file, although the new files have been published and downloadable for several hours (planet-latest.osm.bz2 itself redirects correctly to the new file). The same goes for PBF: planet-latest.osm.pbf redirects correctly to the new planet-201207.osm.pbf, but planet-latest.osm.pbf.torrent still redirects to the old planet-201130.osm.pbf.torrent.

Example of failing to redirect to the new torrent file (timezone CET):

% wget -O /dev/null https://planet.openstreetmap.org/planet/planet-latest.osm.bz2.torrent
--2020-12-12 01:21:31--  https://planet.openstreetmap.org/planet/planet-latest.osm.bz2.torrent
Resolving planet.openstreetmap.org (planet.openstreetmap.org)... 2001:978:2:2c::172:a, 130.117.76.10
Connecting to planet.openstreetmap.org (planet.openstreetmap.org)|2001:978:2:2c::172:a|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://planet.openstreetmap.org/planet/2020/planet-201130.osm.bz2.torrent [following]
--2020-12-12 01:21:31--  https://planet.openstreetmap.org/planet/2020/planet-201130.osm.bz2.torrent
Reusing existing connection to [planet.openstreetmap.org]:443.
HTTP request sent, awaiting response... 200 OK
Length: 498764 (487K) [application/x-bittorrent]
Saving to: '/dev/null'

At the same time, https://planet.openstreetmap.org/planet/planet-latest.osm.bz2 redirects correctly:

% wget -O /dev/null https://planet.openstreetmap.org/planet/planet-latest.osm.bz2
--2020-12-12 01:21:42--  https://planet.openstreetmap.org/planet/planet-latest.osm.bz2
Resolving planet.openstreetmap.org (planet.openstreetmap.org)... 2001:978:2:2c::172:a, 130.117.76.10
Connecting to planet.openstreetmap.org (planet.openstreetmap.org)|2001:978:2:2c::172:a|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://planet.openstreetmap.org/planet/2020/planet-201207.osm.bz2 [following]
--2020-12-12 01:21:43--  https://planet.openstreetmap.org/planet/2020/planet-201207.osm.bz2
Reusing existing connection to [planet.openstreetmap.org]:443.
HTTP request sent, awaiting response... 200 OK
Length: 104601925223 (97G) [application/x-bzip2]

mnalis commented 3 years ago

An RSS feed for the .torrent files, complying with the requested criteria (in ticket https://github.com/openstreetmap/chef/issues/373), is available in PR https://github.com/openstreetmap/chef/pull/383, if someone could merge it.