gusmb opened this issue 2 years ago
The plugin input could be in a list format, accepting single IP addresses, ranges represented as strings, or a CIDR:
allocation_pool:
- 10.1.1.10-10.1.1.50
- 10.2.2.10-10.2.2.50
- 10.3.3.10
- 10.4.4.0/24
Whatever you think is more convenient to work with. The list format might make it easier for people to manage changes in a git repo, adding or removing a range from the pool on its own line. The compact string format is also great, but could lead to more merge conflicts when multiple contributors modify the same line.
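Something along these lines could serve as the expansion logic; the expand_ip_ranges name and the Ansible FilterModule packaging below are just placeholders, not an existing AVD plugin:

from ipaddress import ip_address, ip_network


def expand_ip_ranges(entries):
    """Expand single IPs, "start-end" ranges and CIDR subnets into a flat list."""
    output = []
    for entry in entries:
        entry = str(entry).strip()
        if "-" in entry:
            # Range given as "start-end", inclusive of both ends.
            start, end = (ip_address(ip.strip()) for ip in entry.split("-"))
            output.extend(str(ip_address(ip)) for ip in range(int(start), int(end) + 1))
        elif "/" in entry:
            # CIDR subnet: expand to its usable host addresses.
            output.extend(str(ip) for ip in ip_network(entry).hosts())
        else:
            # Single IP address.
            output.append(str(ip_address(entry)))
    return output


class FilterModule:
    """Expose the helper as an Ansible filter plugin (hypothetical packaging)."""

    def filters(self):
        return {"expand_ip_ranges": expand_ip_ranges}

For example, expand_ip_ranges(["10.1.1.10-10.1.1.12", "10.4.4.0/30"]) would return ['10.1.1.10', '10.1.1.11', '10.1.1.12', '10.4.4.1', '10.4.4.2'].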
I don't think we will implement the range feature as described here, since it is very hard to implement in a performance-oriented way. Expanding to a list that could cover a 128-bit address space (for IPv6) would explode at run time. Our current IP methods have been moved to Python in a separate module which can be overridden by users, so it would be possible for you to implement your own Python logic in place of ours. This has replaced the default J2 logic, but we still support customer-specified J2 templates too. I would be happy to discuss this further, maybe better on a Zoom call? Feel free to contact me directly at holbech@arista.com.
Another alternative that might address your concerns would be to use the CIDR as a pool, allocating from the beginning one-by-one and registering the chosen IPs for the next run. This would allow you to allocate a smaller subnet, covering only the IP addresses needed and not wasting any space.
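A minimal sketch of that registration idea, assuming a simple file is used to record previous allocations (the function name, arguments and state file are illustrative only):

import json
from ipaddress import ip_network
from pathlib import Path


def allocate_from_cidr(pool, hostname, state_file="ip_allocations.json"):
    """Hand out addresses one-by-one from the start of the pool and remember
    them between runs, so reruns stay idempotent. The file-based state store
    and all names here are illustrative only."""
    path = Path(state_file)
    allocations = json.loads(path.read_text()) if path.exists() else {}

    if hostname in allocations:
        # Idempotent: reuse the address registered on a previous run.
        return allocations[hostname]

    used = set(allocations.values())
    for ip in ip_network(pool).hosts():  # iterate lazily from the beginning of the pool
        if str(ip) not in used:
            allocations[hostname] = str(ip)
            path.write_text(json.dumps(allocations, indent=2))
            return str(ip)

    raise ValueError(f"Pool {pool} is exhausted")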
Will do, thanks! I also think you're right, but the Ansible ipaddr filter supports similar things that can explode at runtime. This is a utility that must be used carefully and reasonably, and it would not come without a disclaimer about the runtime performance implications. It is not suitable for big ranges, but could be really useful for small pools where this is needed, such as NAT, DHCP, etc. I think it would be suitable for a user to use as a helper plugin, but not to be used natively in the AVD roles.
I tried building it out, including support for multiple ranges and even reversed ranges (counting down).
#!/usr/bin/python3
from ipaddress import ip_address

# Comma-separated ranges; a range counts down when the first address is the higher one.
IPRANGES = "10.1.1.1-10.1.1.10, 10.1.1.29-10.1.1.20"

output = []
for iprange in IPRANGES.split(","):
    # Parse "start-end" into two ip_address objects.
    ip_elements = [
        ip_address(element.strip()) for element in iprange.strip().split("-")
    ]
    if ip_elements[0] <= ip_elements[1]:
        # Ascending range, inclusive of both ends.
        ip_addresses = range(int(ip_elements[0]), int(ip_elements[1]) + 1)
    else:
        # Descending (reversed) range, inclusive of both ends.
        ip_addresses = range(int(ip_elements[0]), int(ip_elements[1]) - 1, -1)
    output.extend([str(ip_address(ip)) for ip in ip_addresses])
print(output)
Output:
['10.1.1.1', '10.1.1.2', '10.1.1.3', '10.1.1.4', '10.1.1.5', '10.1.1.6', '10.1.1.7', '10.1.1.8', '10.1.1.9', '10.1.1.10', '10.1.1.29', '10.1.1.28', '10.1.1.27', '10.1.1.26', '10.1.1.25', '10.1.1.24', '10.1.1.23', '10.1.1.22', '10.1.1.21', '10.1.1.20']
But it would require us to support a list of IPs as input, which I doubt we can do in the current models. I will think more about this and see if it could fit in somewhere. Feel free to use this snippet any way you like.
Thanks! Another option, thinking about allocations, would be to keep the logic in Python and return a single IP address based on the same rules:
ip_ranges:
- 10.1.1.10-10.1.1.20
- 10.1.1.30-10.1.1.40
{% set ip_index = <my_logic> %}
{% set allocated_ip = ip_ranges | allocate_ip(ip_index) %}
The filter would essentially expand those ranges internally, in order, and just return the IP address at the index specified by the user. Having thought more about this, I can't think of a reason why the user would want the whole list of IP addresses besides performing an allocation from the pool.
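Something like this could do it, walking the ranges arithmetically so the full list is never built; the allocate_ip name and its arguments are just placeholders, not an existing AVD filter:

from ipaddress import ip_address


def allocate_ip(ip_ranges, ip_index):
    """Return the single IP at ip_index (0-based) across the given ranges,
    without ever building the full list."""
    remaining = ip_index
    for ip_range in ip_ranges:
        start, end = (ip_address(ip.strip()) for ip in ip_range.split("-"))
        size = int(end) - int(start) + 1
        if remaining < size:
            return str(ip_address(int(start) + remaining))
        remaining -= size
    raise ValueError(f"Index {ip_index} is outside the combined size of the ranges")


# Example: index 12 lands in the second range.
print(allocate_ip(["10.1.1.10-10.1.1.20", "10.1.1.30-10.1.1.40"], 12))  # 10.1.1.31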
Hi Gustavo, we have hit a need for defining usable ip ranges within a larger pool, which brought me back to this ask. I have given it some thought, and I think we could implement something like:
loopback_ipv4_pool: 192.168.0.0/24
loopback_ipv4_pool_ranges:
- 192.168.0.10-192.168.0.19
- 192.168.0.30-192.168.0.39
and then implement this in our new Python module for IP addressing. I would use iterators to avoid expanding the pools and ranges into lists (for IPv6 this explodes). It would not take much to also expose the ._ip() method as a Jinja filter.
With the above example, we would allocate IPs like:
switch id 1 -> 192.168.0.11
switch id 9 -> 192.168.0.19
switch id 10 -> 192.168.0.30
switch id 20 -> Raise an error because there are not enough available IPs.
Would this work for you?
Notice that above I keep the current behavior of 0-based id. We could change this in the case of having ranges, so:
switch id 1 -> 192.168.0.10
switch id 9 -> 192.168.0.18
switch id 10 -> 192.168.0.19
switch id 20 -> 192.168.0.39
switch id 21 -> Raise an error because there are not enough available IPs.
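As a rough sketch (the function and argument names are only illustrative), the 1-based scheme could be implemented with iterators so pools and ranges are never expanded into lists:

from ipaddress import ip_address, ip_network
from itertools import chain, islice


def get_ip_from_ranges(pool, ranges, switch_id):
    """Walk the ranges lazily and return the IP for a 1-based switch id,
    falling back to the plain pool when no ranges are given."""
    network = ip_network(pool)

    def iter_range(ip_range):
        start, end = (ip_address(ip.strip()) for ip in ip_range.split("-"))
        for value in range(int(start), int(end) + 1):
            ip = ip_address(value)
            if ip not in network:
                raise ValueError(f"{ip} is outside the pool {pool}")
            yield ip

    candidates = chain.from_iterable(iter_range(r) for r in ranges) if ranges else network.hosts()
    allocated = next(islice(candidates, switch_id - 1, None), None)  # switch_id is 1-based
    if allocated is None:
        raise ValueError(f"Not enough available IPs in the ranges for switch id {switch_id}")
    return str(allocated)


ranges = ["192.168.0.10-192.168.0.19", "192.168.0.30-192.168.0.39"]
print(get_ip_from_ranges("192.168.0.0/24", ranges, 1))   # 192.168.0.10
print(get_ip_from_ranges("192.168.0.0/24", ranges, 20))  # 192.168.0.39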
Hi @ClausHolbechArista, this sounds reasonably good. I think the second approach would be the right one in this case, to make use of all IP addresses in the allocation range. This solution would work for me.
Note that this would still require the total number of allocatable IP addresses to match the max switch id. One thing that led me to use individual allocation for each switch instead is that I can save a lot of IP space this way. For a particular VRF, not all switches are in scope; I might only need 30-40 IPs, but if one of the switches has id 230, I would still need 230 IPs available. And this is difficult to resolve without breaking idempotency...
The proposed approach is valid with that caveat (the sum of the ranges still needs to be large enough), but for large fabrics it might be an issue with IP space. In that case, you might also consider support for per-node loopback assignment in the data model, which works when there are only a few switches in scope for a tenant.
> ... And this is difficult to resolve without breaking idempotency...
Exactly my main concern.
> The proposed approach is valid with that caveat (the sum of the ranges still needs to be large enough), but for large fabrics it might be an issue with IP space. In that case, you might also consider support for per-node loopback assignment in the data model, which works when there are only a few switches in scope for a tenant.
Understood, we can add a per-node loopback IP setting that takes precedence.
The user could solve the idempotency issue with dynamic allocations by plugging in an IPAM. In that case the IPAM keeps the records in a database, so no mapping function is needed. I guess it will be possible to override the Python function to plug in their own allocation method, or to use another key pointing to a custom plugin (Jinja or Python)?
Correct, we will expose the path for the python module used to allocate IPs, so it can be changed by the user.
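As a rough illustration of such a user override, a replacement could keep the same "take an identifier, return an IP" idea but ask an external IPAM instead of computing an offset (the function signature, endpoint and field names below are all hypothetical):

import requests


def get_ip(hostname: str, pool: str) -> str:
    """Ask a (hypothetical) IPAM service for the address reserved for this host.
    The IPAM keeps the allocation records, so reruns stay idempotent."""
    response = requests.get(
        "https://ipam.example.com/api/allocations",  # placeholder IPAM endpoint
        params={"hostname": hostname, "pool": pool},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["address"]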
This issue is stale because it has been open 90 days with no activity. The issue will be reviewed by a maintainer and may be closed.
Enhancement summary
Develop an AVD plugin capable of expanding one or multiple ranges of IP addresses into a list of IP addresses.
The plugin would accept one or multiple IP ranges represented as strings, or even a subnet in CIDR notation, and generate a list of usable IP addresses as a result.
Which component of AVD is impacted
plugins
Use case example
It is a common use case to NOT use CIDR notation for some assignments, such as loopback pools. Especially if the fabric is medium-sized or large, defining such a pool as an entire subnet can lead to a big waste of IP addresses.
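For example, a pool defined as an entire subnet even though only a handful of addresses will ever be used (the variable name is borrowed from the pool discussed earlier in this thread; the value is illustrative):

loopback_ipv4_pool: 192.168.0.0/24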
Instead, for such use cases it would be nice to do something like this:
In YAML:
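A list of ranges along the lines of the examples earlier in this thread (names and values are only illustrative):

ip_ranges:
  - 10.1.1.10-10.1.1.50
  - 10.2.2.10-10.2.2.50
  - 10.4.4.0/24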
In Jinja:
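Then expand the ranges with a filter and index into the result (the expand_ip_ranges filter name is hypothetical):

{% set ip_list = ip_ranges | expand_ip_ranges %}
{% set ip_index = <my_logic> %}
{% set node_ip = ip_list[ip_index] %}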
The result would be a list of usable IP addresses that can be allocated based on standard mapping rules. The plugin could support multiple ranges, a single range, or even CIDR notation; the output would be the same: an expanded list of IP addresses.
This is useful for cases where we don't want to define a pool as an entire subnet, but just as a range of IP addresses.
Describe the solution you would like
Simple filter plugin to cover this feature
Describe alternatives you have considered
Writing my own custom plugin, as the standard ipaddr Ansible filter does not cover this easily
Additional context
No response