amoeba closed this issue 4 years ago
Interesting. Given that `MNStorage.update` provides a SystemMetadata document as part of the payload, why do we then need to immediately go retrieve that info in steps 2 and 3? If there is a version conflict, presumably step 1 would fail and need to be retried. If step 1 succeeds, then presumably the sysmeta that was sent was accepted as valid...
I don't know the rationale for the `getSystemMetadata` call immediately after the `MNStorage.update` call. It may be unnecessary. Though I'll note that the MN mutates the System Metadata after `MNStorage.create`/`update` with properties the application may need later on, so a fetch after create/update doesn't sound unreasonable just yet.
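To make the flow under discussion concrete, here is a minimal sketch of update-then-refetch. The function and client names are illustrative assumptions, not MetacatUI's or the DataONE client library's actual API; it only shows why a fetch after `update` can be useful when the MN mutates System Metadata server-side.

```javascript
// Illustrative sketch (names are hypothetical, not MetacatUI's API).
// The MN may set properties like serialVersion or dateSysMetadataModified
// server-side, so the client re-fetches the authoritative sysmeta after update.
async function updateAndRefresh(client, oldId, newId, object, sysmeta) {
  // Step 1: send the new object plus our locally built system metadata.
  const returnedId = await client.update(oldId, object, newId, sysmeta);
  // Steps 2-3: fetch back the authoritative, possibly MN-mutated sysmeta.
  return client.getSystemMetadata(returnedId);
}
```

If the MN never mutated sysmeta, step 1's success would indeed make the refetch redundant; the refetch only pays off because the server's copy can differ from what was sent.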
This turned out to be a bit of a red herring: the failure was actually caused by a double-save issue, which was itself caused by a race condition in `DataPackage.save()`'s logic. See https://github.com/NCEAS/metacatui/commit/0455281ca18b9c686680ff799cfb1ece918d3a7c for a writeup. I modified the logic to guard against the race condition and have tested that the double-save goes away.
I'm seeing the following behavior on my `feature-replace-item` branch. Scanning my Network pane I see a culprit for the breakage:

1. `MNStorage.update` fires off, is successful (HTTP 200), and returns `$NEWID` in the response like it should. Good so far.
2. `MNStorage.getSystemMetadata($NEWID)` 404s.
3. `MNStorage.getSystemMetadata($NEWID)` 200s with our system metadata in the response.

It looks like the error at (2) breaks the overall save process even though the failing request appears to get retried and succeeds.
`DataPackage.save` has retry logic, so I'm going to start debugging near that to see why the save process fails even though the failing request is retried like we expect.