jeffkala opened 11 months ago
I think object storage is the correct design choice, but more important is having a scalable choice; I just want to ensure we don't limit any ideas.
I really like this as well, especially for large networks with fairly regular changes (e.g., interface updates).
Would be interested to see whether, with something like this, we could eliminate the need to store actual and intended configs in the DB (I think we still do that). I've not heard of performance issues related to that table's size, but I could see it happening at large scale.
There was discussion at one point about native git-time-travel-type functionality directly in Golden Config for viewing diffs of config over time. If that's on the table for future implementation, there may need to be some additional scope to account for versioning in the supported backends (e.g., once versioning is enabled in S3, how do you programmatically retrieve an older version to perform the diff? See the sketch below). All possible, but it may require paradigms similar to secrets + secret providers.
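For reference, retrieving and diffing an older object version is reasonably straightforward with boto3 once bucket versioning is enabled. A minimal sketch, assuming a versioned bucket with at least two versions of the key; the bucket and key names are hypothetical:

```python
# Minimal sketch: diff the two most recent versions of a backed-up config
# stored in a versioned S3 bucket. Bucket and key names are hypothetical.
import difflib

import boto3

s3 = boto3.client("s3")
bucket, key = "nautobot-backups", "backups/nyc-rtr-01.cfg"

# list_object_versions returns versions newest-first for each key;
# filter on the exact key since Prefix can match more than one object.
versions = [
    v
    for v in s3.list_object_versions(Bucket=bucket, Prefix=key)["Versions"]
    if v["Key"] == key
]
latest, previous = versions[0], versions[1]


def fetch(version_id):
    """Read one version of the object as a list of lines."""
    obj = s3.get_object(Bucket=bucket, Key=key, VersionId=version_id)
    return obj["Body"].read().decode().splitlines(keepends=True)


diff = difflib.unified_diff(
    fetch(previous["VersionId"]),
    fetch(latest["VersionId"]),
    fromfile=f"{key}@{previous['VersionId']}",
    tofile=f"{key}@{latest['VersionId']}",
)
print("".join(diff))
```

Azure Blob Storage offers similar version listing, so a provider-style abstraction (as with secret providers) could let each backend supply its own "fetch version N" implementation.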
> There was discussion at one point about native git-time-travel-type functionality directly in Golden Config for viewing diffs of config over time.

Good point; we need to consider this in any design.
> All possible, but it may require paradigms similar to secrets + secret providers.

Agreed.
### Environment
### Proposed Functionality
Extend the Nautobot core datasources feature to also support object storage (think S3, Azure Storage, etc.). This would be an "instead of" feature: object storage used in place of a Git repository.

Would most likely be a Nautobot core feature, but could investigate this in an app first.

Should look to see whether native django-storages already has a way to define this; a rough sketch of what that could look like follows.
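For illustration, a minimal sketch assuming django-storages with its S3 backend on Django 4.2+. The `STORAGES` setting, `S3Boto3Storage` backend, and the `storages` registry are standard; the `config_backups` alias and bucket name are hypothetical:

```python
# settings.py -- a hypothetical "config_backups" storage alias that a
# datasource implementation could resolve instead of cloning a Git repo.
STORAGES = {
    "default": {
        "BACKEND": "django.core.files.storage.FileSystemStorage",
    },
    "staticfiles": {
        "BACKEND": "django.contrib.staticfiles.storage.StaticFilesStorage",
    },
    "config_backups": {
        "BACKEND": "storages.backends.s3boto3.S3Boto3Storage",
        "OPTIONS": {
            "bucket_name": "nautobot-backups",  # hypothetical bucket
            "region_name": "us-east-1",
        },
    },
}
```

```python
# App/core code could then read a backup through the Django storage API,
# so the same code works whether the backend is S3, Azure, or local disk.
from django.core.files.storage import storages

backup_storage = storages["config_backups"]
with backup_storage.open("nyc-rtr-01.cfg") as fh:
    config_text = fh.read().decode()
```

Since django-storages already abstracts S3, Azure, GCS, and others behind one API, the datasource work would mostly be sync/enumeration semantics rather than per-provider clients.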
### Use Case
I'm a network engineer on a network with hundreds of thousands of devices; at that scale, git push/pull mechanisms, and Git in general, begin to hit scaling problems. Being able to substitute object storage buckets for backup repos could allow larger networks to scale better.