nautobot / pynautobot

Nautobot Python SDK
https://pynautobot.readthedocs.io/en/latest/index.html
Apache License 2.0

Calling getattr(pynautobot_object, attribute, default_value) with a default argument resets changes #197

Open nrnvgh opened 1 month ago

nrnvgh commented 1 month ago

Environment

Nautobot: 1.6.21 pynautobot: 1.5.2

Details

If you:

  1. Fetch an object from Nautobot (e.g. an Interfaces object which I’ll call iface)
  2. Update one or more attributes of that object (e.g. iface.description = "foo") but do not call save()
  3. Call getattr() on the object for an attribute which does not exist with a supplied default (e.g. getattr(iface, "bad_attribute", "default_value"))

…then the updates are lost because:

  1. pynautobot re-fetches/re-populates the object, blowing away the changes and
  2. the three-argument form of getattr() suppresses the AttributeError

Note: there are other paths that trigger the same issue; the two-argument form of getattr(), or even a direct attribute reference like foo = my_pynautobot_object.bad_attribute, has the same effect provided the call is wrapped in a try/except block:

iface = nb_client.dcim.interfaces.get(UUID)
# assume for the moment that the current description is "old description"
iface.description = "new description"
try:
    x = iface.bad_attribute
except AttributeError:
    pass

print(iface.description) # will print 'old description'
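The mechanism can be reproduced without a Nautobot server. The sketch below is a minimal stand-in (FakeRecord and _refresh_from_server are illustrative names, not pynautobot API) for the lazy-load behavior: __getattr__ triggers a full re-fetch as a side effect before raising AttributeError, and the three-argument getattr() then swallows that AttributeError, so nothing signals that the local edit was clobbered:

```python
class FakeRecord:
    """Illustrative stand-in for a pynautobot Record (not the real class)."""

    def __init__(self):
        self.description = "old description"

    def _refresh_from_server(self):
        # Simulates pynautobot re-fetching the full object from the API,
        # which discards any unsaved local edits.
        self.description = "old description"

    def __getattr__(self, name):
        # Only called for *missing* attributes. The lazy-load hook pulls
        # the full record (the side effect) before giving up on the name.
        self._refresh_from_server()
        raise AttributeError(name)


iface = FakeRecord()
iface.description = "new description"

# The default argument suppresses the AttributeError raised above, so the
# refresh (and the lost edit) goes completely unnoticed by the caller.
value = getattr(iface, "bad_attribute", "default_value")

print(value)              # default_value
print(iface.description)  # old description -- the local edit is gone
```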
joewesch commented 1 month ago

This seems like "working as intended" as the simple addition of .save() (as you alluded to) would suffice as a fix. Can you please provide a use case for needing the ability to update a local object's fields without saving first?

nrnvgh commented 1 month ago

For the project I'm currently working on, I'm making Lots™ of changes of various types to our Nautobot data. As an optimization, I make changes to all objects of a particular class and then perform a parallelized .save() across them in one go. This optimization can be hugely beneficial, particularly when latency between client and server is pronounced; as an example, here's output from a script which performs a serialized .save() vs one parallelized across 5 workers:

Serialized update of 64 objects took: 29.62 seconds
Parallel update of 64 objects took: 6.40 seconds

Note that 64 objects is something of a small operation; in this case, it was all the interfaces on two devices. Updating all the interfaces on 20 or 30 devices at once is easily possible for us. Additionally, it feels counterintuitive to me that an ostensibly read-only action (reading an attribute) can result in a write operation as a silent side-effect. Mumble mumble Principle of Least Surprise and so forth. =)
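The parallelized-.save() pattern described above can be sketched with the standard library; parallel_save and the worker count are illustrative choices, not pynautobot API (each Record's .save() still issues its own HTTP request, the pool just overlaps the per-request latency):

```python
from concurrent.futures import ThreadPoolExecutor


def parallel_save(records, workers=5):
    """Call .save() on each record across a pool of worker threads.

    records: an iterable of objects exposing a .save() method, e.g.
    pynautobot Records that have had fields modified locally.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves input order and re-raises any exception
        # from a worker, so a failed save is not silently dropped.
        return list(pool.map(lambda rec: rec.save(), records))
```

Usage would look like: fetch the interfaces, set iface.description on each, then parallel_save(ifaces) once at the end, which is where an intervening getattr()-triggered refresh becomes destructive.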

joewesch commented 1 month ago

Would switching to a Bulk Update operation (pynautobot 2.1.0+) benefit you? You could build a list of dictionaries of the objects you want to update and pass that into the .update() method. That way you aren't relying on the individual local objects staying the same.
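A hedged sketch of the bulk-update shape described above (pynautobot 2.1.0+): collect the pending edits as a list of dicts, each carrying the record "id" plus the changed fields, then send them in one call instead of N. The UUIDs are placeholders and build_bulk_payload is an illustrative helper, not pynautobot API:

```python
def build_bulk_payload(changes):
    """Flatten a mapping of record id -> {field: new_value} into the
    list-of-dicts form the bulk .update() method expects."""
    return [{"id": rec_id, **fields} for rec_id, fields in changes.items()]


# Placeholder ids; in practice these come from the fetched Records.
pending = {
    "11111111-0000-0000-0000-000000000001": {"description": "uplink to core-1"},
    "11111111-0000-0000-0000-000000000002": {"description": "uplink to core-2"},
}

payload = build_bulk_payload(pending)
# One request for the whole batch, with no reliance on the local
# Record objects staying unmodified in the meantime:
# nb_client.dcim.interfaces.update(payload)
```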

joewesch commented 1 month ago

Oh, I realize that you are still on Nautobot 1.6 😞

joewesch commented 1 month ago

Nautobot 1.6 (and therefore pynautobot 1.X) is in LTM status so we definitely won't be making any large changes to the code base for those older versions - only bug and security fixes.

nrnvgh commented 1 month ago

Yep, I absolutely get the LTM status here; this "silently updating my object out from under me if I read an object's attributes in a certain way" felt like a bug to me is all. Obviously I can work around it (and have), but I didn't want anyone else to get bitten by it if possible.