Closed: JonathanNathanson closed this issue 7 months ago.
Thanks for the submission! This definitely looks wrong, we should account for the new data structure.
@JonathanNathanson can you confirm this is addressed now with #395?
@jdrew82 I'm seeing this error when running Example Data Source:
celery_worker-1 | [2024-03-19 21:54:54,821: ERROR/ForkPoolWorker-1] Task nautobot_ssot.jobs.examples.ExampleDataSource[4eba9feb-bd28-48b4-8f71-d839275a0f2b] raised unexpected: ValidationError(model='LocationModel', errors=[{'loc': ('location_type__name',), 'msg': 'field required', 'type': 'value_error.missing'}, {'loc': ('status__name',), 'msg': 'field required', 'type': 'value_error.missing'}])
celery_worker-1 | Traceback (most recent call last):
celery_worker-1 | File "/usr/local/lib/python3.9/site-packages/celery/app/trace.py", line 477, in trace_task
celery_worker-1 | R = retval = fun(*args, **kwargs)
celery_worker-1 | File "/usr/local/lib/python3.9/site-packages/nautobot/extras/jobs.py", line 153, in __call__
celery_worker-1 | return self.run(*args, **deserialized_kwargs)
celery_worker-1 | File "/opt/nautobot/.local/lib/python3.9/site-packages/nautobot_ssot/jobs/examples.py", line 418, in run
celery_worker-1 | super().run(dryrun, memory_profiling, *args, **kwargs)
celery_worker-1 | File "/opt/nautobot/.local/lib/python3.9/site-packages/nautobot_ssot/jobs/base.py", line 316, in run
celery_worker-1 | self.sync_data(memory_profiling)
celery_worker-1 | File "/opt/nautobot/.local/lib/python3.9/site-packages/nautobot_ssot/jobs/base.py", line 144, in sync_data
celery_worker-1 | self.load_target_adapter()
celery_worker-1 | File "/opt/nautobot/.local/lib/python3.9/site-packages/nautobot_ssot/jobs/examples.py", line 428, in load_target_adapter
celery_worker-1 | self.target_adapter.load()
celery_worker-1 | File "/opt/nautobot/.local/lib/python3.9/site-packages/nautobot_ssot/jobs/examples.py", line 351, in load
celery_worker-1 | loc_model = self.location(
celery_worker-1 | File "/opt/nautobot/.local/lib/python3.9/site-packages/pydantic/main.py", line 341, in __init__
celery_worker-1 | raise validation_error
celery_worker-1 | pydantic.error_wrappers.ValidationError: 2 validation errors for LocationModel
celery_worker-1 | location_type__name
celery_worker-1 | field required (type=value_error.missing)
celery_worker-1 | status__name
celery_worker-1 | field required (type=value_error.missing)
Certainly seems to get further through the job than before.
A side note: the instance I ran it in already had a LocationType named Building. I'll test it with a clean database and update this comment.
Update: OK, so it ran on a clean database and executed with a "success" status. The celery worker did moan though, and there are errors in the Sync logs:
celery_worker-1 | [2024-03-19 22:01:00,863: ERROR/ForkPoolWorker-1] Couldn't find 'status' instance behind 'status' with: {'name': 'NULL'}.
celery_worker-1 | [2024-03-19 22:01:00,864: WARNING/ForkPoolWorker-1] No object resulted from sync, will not process child objects.
celery_worker-1 | [2024-03-19 22:01:00,865: ERROR/ForkPoolWorker-1] Couldn't find 'status' instance behind 'status' with: {'name': 'NULL'}.
celery_worker-1 | [2024-03-19 22:01:00,866: WARNING/ForkPoolWorker-1] No object resulted from sync, will not process child objects.
celery_worker-1 | [2024-03-19 22:01:00,867: ERROR/ForkPoolWorker-1] Couldn't find 'status' instance behind 'status' with: {'name': 'NULL'}.
celery_worker-1 | [2024-03-19 22:01:00,867: WARNING/ForkPoolWorker-1] No object resulted from sync, will not process child objects.
.... circa 100 lines of the same
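From those messages it looks like the remote adapter is passing the literal string "NULL" through as the status name, so the status lookup on the Nautobot side can never match. A guard along these lines in the remote adapter's load() would avoid that (purely illustrative; the names are mine, not the plugin's):

```python
# Illustrative only: don't pass a null status from the remote API straight through.
DEFAULT_STATUS_NAME = "Active"  # assumption: a sensible fallback for the example job


def normalize_status_name(raw_status):
    """Return a usable status name, falling back when the remote sends null or "NULL"."""
    if not raw_status or raw_status == "NULL":
        return DEFAULT_STATUS_NAME
    # The remote API may return either a plain string or a nested {"name": ...} object.
    return raw_status["name"] if isinstance(raw_status, dict) else raw_status
```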
The records created were LocationTypes and Locations. I didn't get any Prefixes or Tenants, although the source code for the example job suggests I should have. And indeed, the job logs do show that the adapter loaded records for prefixes and tenants, but they did not get created.
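My guess at why the loaded prefixes and tenants never turn into records: in diffsync, only the models named in the adapter's top_level (plus anything reachable through a parent's _children) are walked during the diff, and the "will not process child objects" warnings above mean children of a failed parent get skipped as well. Roughly (illustrative, not the example job's real models):

```python
from typing import List

from diffsync import DiffSync, DiffSyncModel


class PrefixModel(DiffSyncModel):
    _modelname = "prefix"
    _identifiers = ("prefix",)

    prefix: str


class TenantModel(DiffSyncModel):
    _modelname = "tenant"
    _identifiers = ("name",)
    _children = {"prefix": "prefixes"}  # assumption: prefixes hang off tenants

    name: str
    prefixes: List[str] = []


class TargetAdapter(DiffSync):
    tenant = TenantModel
    prefix = PrefixModel
    # Only tenants (and, through _children, their prefixes) are diffed; if a
    # tenant fails to sync, its child prefixes are never created.
    top_level = ["tenant"]
```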
@JonathanNathanson can you test the fixes in #405 and let us know if you're still running into these issues?
I re-ran this against the instance I ran it on previously, so there was some data in there but not all of it. It kicked up a whole heap of errors. What's the best way to export the full list?
Some examples:
2024-03-27 09:13 [Nautobot (remote) → Nautobot, 2024-03-27 09:12](http://localhost:8080/plugins/ssot/history/a8fe540d-681b-471c-a5d0-d50d5e0da9c3/) update location ORD02 error
{
"+": {
"parent__name": "United States",
"tenant__name": "Nautobot Airports",
"parent__location_type__name": "Region"
},
"-": {
"parent__name": null,
"tenant__name": null,
"parent__location_type__name": null
}
}
Validated save failed for Django object. Parameters: {'parent__name': 'United States', 'parent__location_type__name': 'Region', 'tenant__name': 'Nautobot Airports'}
2024-03-27 09:13 [Nautobot (remote) → Nautobot, 2024-03-27 09:12](http://localhost:8080/plugins/ssot/history/a8fe540d-681b-471c-a5d0-d50d5e0da9c3/) create device ang01-pdu-12__ANG01__United States error
{
"+": {
"serial": "",
"asset_tag": null,
"role__name": "PDU",
"status__name": "Active",
"tenant__name": "Nautobot Baseball Stadiums",
"platform__name": null,
"device_type__model": "APDU9941",
"location__location_type__name": "Site",
"device_type__manufacturer__name": "APC",
"location__parent__location_type__name": "Region"
}
}
Couldn't find 'location' instance behind 'location' with: {'name': 'ANG01', 'parent__name': 'United States', 'location_type__name': 'Site', 'parent__location_type__name': 'Region'}.
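For what it's worth, that last error boils down to the lookup below failing on the target side; if that exact combination of name, parent, and location type doesn't already exist, the foreign key can't be resolved and the device create is aborted (illustrative query, not the plugin's actual code):

```python
from nautobot.dcim.models import Location

try:
    location = Location.objects.get(
        name="ANG01",
        parent__name="United States",
        location_type__name="Site",
        parent__location_type__name="Region",
    )
except Location.DoesNotExist:
    # This is what "Couldn't find 'location' instance behind 'location'" means:
    # the sync layer cannot resolve the device's location, so it skips the device.
    location = None
```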
I then cleaned out the database (deleted all device types, manufacturers, platforms, devices, locations, location types) and re-ran it. All records were created successfully.
So there only seem to be issues when syncing after the initial sync, perhaps because the data left over from the previous import (run with the previously tested version of the SSOT job) was bad?
As a side note, I can see in the job logs that it loads a whole bunch of prefixes from the remote, but they don't get created.
Environment
Expected Behavior
For the example job "Example Data Source" to sync against the default parameters (https://demo.nautobot.com) without producing errors.
Observed Behavior
The job fails to run with a return value of:
{'exc_type': 'KeyError', 'exc_module': 'builtins', 'exc_message': ['label']}
The traceback looks like this, indicating that the job expects a different data format from the one returned by the source system's API:
Steps to Reproduce
Run Nautobot with the latest-py3.9 tag, with plugin_requirements.txt including nautobot_ssot.
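For reference, the plugin is enabled the standard way; this is roughly what the config looks like (values are illustrative rather than copied verbatim from my setup):

```python
# nautobot_config.py (excerpt)
PLUGINS = ["nautobot_ssot"]

PLUGINS_CONFIG = {
    "nautobot_ssot": {
        # The example jobs are hidden by default; this needs to be off so that
        # "Example Data Source" shows up and can be run.
        "hide_example_jobs": False,
    },
}
```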