peeringdb / peeringdb

Server code for https://www.peeringdb.com/
BSD 2-Clause "Simplified" License

Increase IPv4/6 prefix limits #671

Closed arnoldnipper closed 1 year ago

arnoldnipper commented 4 years ago

Increasing limits to

could make sense when looking at Potaroo. This ticket refers to #101. Increasing the limits was suggested by Gavin Tweedie from Megaport.

ghankins commented 4 years ago

Most routers support a 32- or 64-bit number in their route policies. Why not make it 0..4294967295? I don't think PeeringDB should set a limit on what we think is a reasonable value. Let the user decide that for themselves.

Plus we'll always have to monitor the internet routing table size to adjust the limit.

gavintweedie commented 4 years ago

I don't mind there being some smarts to catch typos etc., especially from newbie networks. Clearly nobody (right now) needs 1 million IPv6 routes as a valid limit. Perhaps if we do have a limit it could be automated and tied to the size of the routing table at that time?

6939 is announcing north of 30k IPv6 routes these days, and as they participate on the route servers, once you add in a number of other peers, route server tables are starting to grow beyond 40k.

Gavin

arnoldnipper commented 4 years ago

@peeringdb/pc any ideas?

mcmanuss8 commented 4 years ago

I agree with Greg in principle, but in reality letting people put in any number with no safety check usually results in bad data. I think having an upper bound would be good, but doing it statically would be annoying.

Two things to consider:

1) The limits should be controlled by config instead of code, so we can easily alter them without doing a PeeringDB release. Give a simple CRUD interface or something to the admin committee and let them have control over it.
2) In addition to 1), we could write something that finds a reasonable max number of prefixes by looking at Potaroo or route-views and adds 10% or so on some cadence (monthly?) to automagically raise the limit as the global routing table grows (a rough sketch follows below).
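
As a rough sketch of the second idea (the data source is left out; the function name, headroom, and rounding rule here are purely illustrative and not anything PeeringDB actually implements):

```python
import math

def suggested_prefix_limit(current_dfz_routes: int, headroom: float = 0.10,
                           round_to: int = 100_000) -> int:
    """Current global table size plus ~10% headroom, rounded up to a
    round boundary, for use as the configured upper bound."""
    padded = current_dfz_routes * (1 + headroom)
    return math.ceil(padded / round_to) * round_to

# e.g. with ~830k IPv4 routes in the DFZ: 830_000 * 1.1 -> 913_000 -> 1_000_000
print(suggested_prefix_limit(830_000))
# and for IPv6, rounding to the nearest 10k: 111_000 * 1.1 -> 122_100 -> 130_000
print(suggested_prefix_limit(111_000, round_to=10_000))
```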

ghankins commented 4 years ago

I think the proposed solution adds a lot of needless complexity for little gain.

There is a more fundamental issue here: PeeringDB should validate data to ensure it correctly matches the data type (IP addresses, ASNs minus reserved ASNs, integer ranges, etc.), but we should not dictate what we think are valid values within that type.

This may not be the best example but I'm worried about setting a precedent where PeeringDB dictates how operators run their networks. I'm kind of surprised there are even limits that we need to increase.

shane-kerr commented 4 years ago

While I think that I understand @mcmanuss8's point about validating data as much as reasonable, I kind of agree with @ghankins. We're not going to catch errors where someone types 81000 instead of 18000, so this is only a mild check. Given that the "reasonable" limit requires care & feeding (as @mcmanuss8 presents), I don't think it is worth it.

arnoldnipper commented 4 years ago

@ghankins there is a difference between dictating what we think are valid values and defining ranges for reasonable values. Values have to make operational sense, and hence putting in ranges is a safeguard.

Having a config file as @mcmanuss8 proposes is an excellent idea. These data could be controlled by either @peeringdb/ac or @peeringdb/oc, whichever committee makes more sense.

mcmanuss8 commented 4 years ago

Another approach here would be to shift from a hard error to a soft error. If you set the value outside the actual range (32-bit or 64-bit), we raise a hard error. If you set it outside the configured range, we show a soft error: "The prefix limit of $your_input seems very high. Most are less than $what_we_have_in_config. Are you sure it is correct?"
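
A minimal sketch of that split, with made-up names and nothing taken from the actual PeeringDB codebase:

```python
MAX_UINT32 = 2**32 - 1  # hard limit: largest value a 32-bit prefix-limit field can hold

def check_prefix_limit(value: int, configured_max: int):
    """Hard-error outside the representable range; soft-warn above the
    configured 'reasonable' range. Returns a warning string or None."""
    if value < 0 or value > MAX_UINT32:
        raise ValueError(f"{value} is not a valid prefix limit")
    if value > configured_max:
        return (f"The prefix limit of {value} seems very high. Most are "
                f"less than {configured_max}. Are you sure it is correct?")
    return None  # within the configured range, no warning
```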

ccaputo commented 4 years ago

There is already a config file with the settings 500000/50000. The limits in https://github.com/peeringdb/peeringdb/blob/master/config/facsimile/peeringdb.yaml are unchanged since that file was committed on Nov 8, 2018. Updating that periodically to the Potaroo counts, rounded up to the nearest 100k/10k (v4/v6), seems reasonable. I.e., for now 900k/90k, or even 1M/100k, which would potentially be good for years.

Out of curiosity, and not because I suggest using IRR data for this, here are the prefix counts of the as-sets of some backbones, from a route server operator's perspective, along with their current info_prefixes{4,6} settings in PeeringDB:

| ASN  | as-set count (v4) | info_prefixes4 | as-set count (v6) | info_prefixes6 |
|------|------------------:|---------------:|------------------:|---------------:|
| 1299 | 125,091 | 426,000 | 34,180 | 40,000 |
| 2914 | 123,690 | 350,000 | 31,110 | 35,000 |
| 3257 | 122,334 | 350,000 | 30,719 | 35,000 |
| 3491 | 113,330 | 500,000 | 27,220 | 50,000 |
| 6453 | 123,033 | 250,000 | 31,109 | 21,000 |
| 6461 |  82,205 | 115,000 | 14,796 |  8,000 |
| 6939 |  77,161 | 169,000 | 15,628 | 49,000 |
Another data point: at present, 5 networks in PeeringDB set their IPv4 prefix count to the max of 500k, while 68 networks set their IPv6 prefix count to the max of 50k. I can't see what Gavin wrote in the ticket, but I am curious why there is even a need to go above the current limits.

arnoldnipper commented 4 years ago

> I can't see what Gavin wrote in the ticket, but I am curious why there is even a need to go above the current limits.

If you have AS6939 connected to your IX, they already announce 30k+ IPv6 routes. HE have set their value to 49k. So, calculating with 40k prefixes and adding 50% headroom, that would sum up to 60k.

Increasing to 10^6/10^5 for IPv4/IPv6 seems reasonable IMHO

ccaputo commented 4 years ago

> I can't see what Gavin wrote in the ticket, but I am curious why there is even a need to go above the current limits.

> If you have AS6939 connected to your IX, they already announce 30k+ IPv6 routes. HE have set their value to 49k. So, calculating with 40k prefixes and adding 50% headroom, that would sum up to 60k.

Very good point. Just realized my IRR as-set stats above are for aggregated prefixes, and don't account for the announcement of more specifics. (E.g., HE's aggregate count is 15,628, but the specifics count is 30k+.)

> Increasing to 10^6/10^5 for IPv4/IPv6 seems reasonable IMHO

Agreed.

shane-kerr commented 4 years ago

Can we set a reminder for 3 or so years from now to raise this value again? :joy:

arnoldnipper commented 4 years ago

@peeringdb/pc could we please vote that @peeringdb/oc sets limits to

in the config file?

arnoldnipper commented 4 years ago

+1

grizz commented 4 years ago

+1 -- please don't PR that config, it's gone in a few days with #548

shane-kerr commented 4 years ago

+1

arnoldnipper commented 3 years ago

@peeringdb/oc Mike Leber from HE suggests setting

funkestefan commented 3 years ago

IPv4 500k might be too low for a T1. IPv6 is currently at 111k; 250k should be sufficient for now.

job commented 3 years ago

Putting the maximum at 70% of the current routing table sizes is probably safe for all involved.

The peeringdb limit should accommodate the largest networks but not be higher than (or close to) the actual DFZ size.

My 2 cents.

funkestefan commented 3 years ago

70% would be 600k for IPv4 and 80k for IPv6. Both are reasonable numbers, but they need manual checking every ~3 months unless we have access to a router to automate all the things.
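
For illustration, the arithmetic behind those figures (the table sizes are the rough counts quoted in this thread, not live data, and the helper is purely hypothetical):

```python
def cap_at_fraction_of_dfz(dfz_routes: int, fraction: float = 0.70,
                           round_to: int = 10_000) -> int:
    """Take 70% of the current table size and round down to a round figure."""
    return int(dfz_routes * fraction) // round_to * round_to

print(cap_at_fraction_of_dfz(860_000, round_to=100_000))  # -> 600000 (IPv4)
print(cap_at_fraction_of_dfz(115_000))                    # -> 80000  (IPv6)
```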

arnoldnipper commented 3 years ago

Mike Leber was trying to key in 129k for their ASN. However, 129k >> 80k. So, @job's rule needs improvement.

job commented 3 years ago

Mike is asking for too much (imho, from a global DFZ perspective).

The point of this feature is to prevent full table leaks. HE does not have 129K prefixes in their customer cone. (I acknowledge HE is one of the world's largest IPv6 networks, but accommodating the experts at HE might have unintentional consequences in other parts of the ecosystem.)

```
mieli$ bgpctl show rib inet | wc -l
  829067
mieli$ bgpctl show rib inet6 | wc -l
  110765
```

I can help automate an alert for a 3- or 6-month review.

I'm also fine with a mechanism where we have a few manual exceptions. HE is somewhat exceptional.

martinhannigan commented 3 years ago

Learning, so bear with me.

Are networks really using PDB to set prefix limits? How many? Why have this “feature”, and why would PDB unilaterally decide the limit if (for example) Mike Leber says 129k? Why is the user wrong, and do we know why he tried to set it to 129k?

Thanks


ccaputo commented 3 years ago

> Are networks really using PDB to set prefix limits?

Not a network, but an IXP (the SIX) is using PeeringDB per-network max prefix count data to inform route server prefix limits. At https://www.seattleix.net/route-servers we state: "Max prefix for IPv4 and IPv6 comes from your ASN's PeeringDB record."
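
As a sketch of how a route server operator might consume those fields via the public PeeringDB API (the endpoint and the info_prefixes4/6 field names are as publicly documented; everything else here is illustrative and not the SIX's actual tooling):

```python
import requests

def peeringdb_prefix_limits(asn: int):
    """Look up a network's declared IPv4/IPv6 prefix counts in PeeringDB."""
    resp = requests.get("https://www.peeringdb.com/api/net",
                        params={"asn": asn}, timeout=10)
    resp.raise_for_status()
    net = resp.json()["data"][0]
    return net["info_prefixes4"], net["info_prefixes6"]

# e.g. feed these (plus any local headroom policy) into route-server max-prefix config
v4_max, v6_max = peeringdb_prefix_limits(6939)
print(f"AS6939: max-prefix {v4_max} (IPv4), {v6_max} (IPv6)")
```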

funkestefan commented 3 years ago

> Learning, so bear with me. Are networks really using PDB to set prefix limits? How many? Why have this “feature”, and why would PDB unilaterally decide the limit if (for example) Mike Leber says 129k?

Yes, there are several; e.g., we do: https://peering.anexia.com/ It is a security measure; see https://tools.ietf.org/html/bcp194 or https://www.manrs.org/isps/guide/filtering/

Networks can easily set their max prefix limits on their own. We can automate all the things and just fetch the data from PDB to produce the config without human interaction. You will find more and more networks that require a well-kept PDB record before you can peer with them.

129k is wrong because it's way over the current maximum number of prefixes seen in the DFZ (110k). Maybe they have a different use case for PDB's prefix limit fields.

grizz commented 3 years ago

70% seems reasonable to me; it would be fairly trivial to update the number every production release, so about once a month.

martinhannigan commented 3 years ago

Understood and it is trivial. And I guess if you don't like it you can also just not use it.


funkestefan commented 2 years ago

People still have issues with the max-prefix-limit field. Any progress on this issue?

job commented 2 years ago

@funkestefan what are the issues?

funkestefan commented 2 years ago

Verizon is using PDB to auto-configure values, but a real-world max-prefix limit can't be set for HE. (HE is now > 100k, our max value.)

job commented 2 years ago

There might be a misunderstanding between HE and Verizon, unrelated to PeeringDB.

I see 48,850 routes via HE on a BGP session in Amsterdam, and I have double-checked this on an IX route server in Canada.

If HE is sending 100K+ IPv6 routes to Verizon, Verizon is configured as a 'full table customer' and not as a 'peer'.

The PeeringDB "IPv6 Prefixes" field is meant to indicate the number of routes in the Customer Cone, not the total number of routes in the BGP Default-Free Zone.

arnoldnipper commented 1 year ago

@peeringdb/oc, as of 28-12-2022, 22:47, I see on Potaroo

Given that we are already at 1M for IPv4 and 100k for IPv6, I suggest raising the values of info_prefixes4 and info_prefixes6, respectively, to

@peeringdb/pc and @peeringdb/ac: comments?

peterhelmenstine commented 1 year ago

+1

DarwinCosta commented 1 year ago

+1

grizz commented 1 year ago

This needs to be a new issue -- created at https://github.com/peeringdb/peeringdb/issues/1298

Another issue for auto-incrementing it on deploy would be nice. :)