This 'Exception caught: unable to update row' error can happen with update cursors after calling .updateRow(row). Will look into the specific issue.
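For context, a minimal sketch of the pattern where this surfaces; the feature class path and field list are hypothetical, not DA's actual code. Wrapping updateRow lets the run log the offending OID instead of failing outright:

```python
import arcpy

fc = r"C:\temp\intermediate.gdb\parcels"   # placeholder intermediate FC
fields = ["OID@", "SHAPE@", "PARCEL_ID"]   # placeholder field list

with arcpy.da.UpdateCursor(fc, fields) as cursor:
    for row in cursor:
        try:
            # ... modify row values here ...
            cursor.updateRow(row)
        except RuntimeError as ex:
            # Bad geometries (self-intersections, nulls) can surface here
            # as "unable to update row"; log the OID and keep going.
            arcpy.AddWarning("Unable to update row OID {}: {}".format(row[0], ex))
```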
This shapefile is 653 MB and contains 2,249 geometry errors, mostly self-intersections plus some null geometries. We should recommend that people import to a file GDB and run the Check/Repair Geometry tools on shapefiles that have loading issues; that has been the best practice for a long time. CheckGeometry takes ~9.5 minutes on this file, so I do not recommend building that kind of check into DA processing, since it would have to run for every shapefile on every load.
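For reference, that recommended workflow looks roughly like this (paths and output names are placeholders):

```python
import arcpy
import os

shp = r"C:\data\parcels.shp"    # placeholder source shapefile
gdb = r"C:\data\staging.gdb"    # placeholder staging file GDB

# Import to a file GDB first, then check/repair there.
arcpy.conversion.FeatureClassToFeatureClass(shp, gdb, "parcels")
fc = os.path.join(gdb, "parcels")

# CheckGeometry writes one row per problem found
# (self-intersection, null geometry, etc.).
arcpy.management.CheckGeometry(fc, os.path.join(gdb, "parcels_errors"))

# RepairGeometry fixes in place; DELETE_NULL drops null-geometry rows.
arcpy.management.RepairGeometry(fc, "DELETE_NULL")
```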
This file would also not load to an enterprise GDB, since geometry checks are much tighter in SDE.
I repaired the geometry for that dataset and ran the tools several times against another file GDB (since the error was on intermediate FCs, there was no need for the slower feature service). Assume this resolves the issue unless testing finds something new.
@SteveGrise I used the standard append tool to append the non-repaired SHP to a FGDB FC and it worked successfully.
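Roughly what that test amounts to, with placeholder paths:

```python
import arcpy

# Append the unrepaired shapefile straight into an existing file GDB
# feature class; NO_TEST skips schema matching.
arcpy.management.Append(
    inputs=r"C:\data\parcels.shp",            # placeholder path
    target=r"C:\data\staging.gdb\parcels",    # placeholder path
    schema_type="NO_TEST",
)
```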
@ChrisBuscaglia yes, a file GDB will accept bad geometries; the comparable test is to load into an enterprise GDB or to run Check/Repair Geometry. SDE is not tolerant of self-intersections and null geometries.
@SteveGrise I understand that bad geometries have caused some problems in the past when copying and pasting FCs to enterprise GDBs. I just appended the 900,000 parcels sourced from a SHP to a SQL Server 2014 database without error. If this is failing at the intermediate dataset (which is a FGDB), I'm wondering why the geometries would be an issue at all. Were you able to reproduce the original issue?
The issue was that the SHAPE_LEN field is in the config file but not in the exported dataset. Apparently requesting the shapefile's shape_length field did not return the expected result when the file was created.
This exposed a larger issue: if fields are not present in the intermediate dataset, they cannot be included in the field calculator, yet the updateCursor included a field that was not in the dataset. So we need to handle the shapefile LEN field more carefully and also be more defensive when source fields from the config file do not exist; a sketch of such a check follows below.
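A defensive check along these lines would keep config-only fields out of the cursor. This is a sketch with a hypothetical helper and placeholder names, not DA's actual code:

```python
import arcpy

def existing_fields(dataset, config_fields):
    """Return only the configured fields that exist in the dataset,
    warning about the missing ones instead of crashing the cursor."""
    present = {f.name.upper() for f in arcpy.ListFields(dataset)}
    keep, missing = [], []
    for name in config_fields:
        (keep if name.upper() in present else missing).append(name)
    if missing:
        arcpy.AddWarning("In config but not in {}: {}".format(
            dataset, ", ".join(missing)))
    return keep

fc = r"C:\temp\intermediate.gdb\parcels"                  # placeholder path
fields = existing_fields(fc, ["PARCEL_ID", "SHAPE_LEN"])  # SHAPE_LEN may be absent
with arcpy.da.UpdateCursor(fc, fields) as cursor:
    for row in cursor:
        # ... recalculate values here ...
        cursor.updateRow(row)
```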
OK - so this has been addressed?
see the check-in above.
@SteveGrise ok, thanks Steve.
Note that much later in processing a geometry error causes the process to fail:
After repairing the geometry I get a similar message:
If I skip simplifying the geometry, the features are deleted, but converting the feature class to JSON appears to fail at the start of the add-features process. I don't think this has to do with the geometry; maybe the feature class is simply too large to convert in one chunk.
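If size is the culprit, batching the upload is one workaround. Here is a rough sketch of posting features in chunks to the layer's REST addFeatures endpoint; the URL, token handling, field names, and chunk size are all placeholder assumptions, not DA's current code:

```python
import json
import urllib.parse
import urllib.request

import arcpy

fc = r"C:\temp\intermediate.gdb\parcels"  # placeholder path
layer_url = "https://example.com/arcgis/rest/services/Parcels/FeatureServer/0"
token = "..."   # assumed to be obtained elsewhere
CHUNK = 1000    # tune down if requests still fail

def post_chunk(features):
    """POST one batch of features to the addFeatures endpoint."""
    data = urllib.parse.urlencode({
        "f": "json",
        "token": token,
        "features": json.dumps(features),
    }).encode("utf-8")
    with urllib.request.urlopen(layer_url + "/addFeatures", data) as resp:
        return json.loads(resp.read())

batch = []
with arcpy.da.SearchCursor(fc, ["SHAPE@JSON", "PARCEL_ID"]) as cursor:
    for shape_json, parcel_id in cursor:
        batch.append({
            "geometry": json.loads(shape_json),   # Esri JSON geometry
            "attributes": {"PARCEL_ID": parcel_id},
        })
        if len(batch) >= CHUNK:
            post_chunk(batch)
            batch = []
if batch:
    post_chunk(batch)
```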
@ChrisBuscaglia @NikkiGolding a decision will need to be made about switching to arcpy.Append instead of the feature service urllib update approach used today. Chris is requesting this change. On the other hand, I have heard concerns about making significant changes to the code before the December release. Note that the plan for the release after December already includes moving to arcpy.Append.
In the short run a decision needs to be made about which direction to take. The Append change will likely not solve the large-dataset problem on its own, since all of the errors so far occur before the feature service is even updated.
In this one large-dataset example it would be simple enough to segregate the data into multiple datasets, run "Replace Data" on the first layer (where 1=1), and Append the remaining layers; see the sketch below. Another approach would be to use DA to push the data into a file geodatabase that way and then update the feature service in several chunks. In either case, geometry and possibly file size are causing most of the problems.
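The segregation idea could look roughly like this using OBJECTID ranges; paths and chunk size are placeholders, and only the Append half is shown since "Replace Data" is a DA operation:

```python
import arcpy

src = r"C:\data\staging.gdb\parcels"     # placeholder source
target = r"C:\data\target.gdb\parcels"   # placeholder target
CHUNK = 100000

oid_field = arcpy.Describe(src).OIDFieldName
max_oid = max(row[0] for row in arcpy.da.SearchCursor(src, ["OID@"]))

start = 0
while start <= max_oid:
    # Select one OID range at a time and append just that slice.
    where = "{0} > {1} AND {0} <= {2}".format(oid_field, start, start + CHUNK)
    lyr = arcpy.management.MakeFeatureLayer(src, "chunk_lyr", where)
    arcpy.management.Append(lyr, target, "NO_TEST")
    arcpy.management.Delete(lyr)
    start += CHUNK
```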
We're at the outer limits of what arcpy supports: large datasets and some of the toughest geometry update challenges. It seems to me this is getting away from the simple field-mapping tool and into solving the worst cases, and the issues we are seeing are likely ArcGIS issues more than DA issues. For more complex situations there is a point where it is more practical to use DA to create intermediate datasets in combination with other arcpy tools.
I will be submitting another issue on Simplify, which I have not been able to resolve yet.
It seems to be failing when setting the progressor bar? The fields can be calculated manually in the intermediate dataset.
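For what it's worth, progressor calls only have an effect inside a geoprocessing tool context, so guarding them is cheap. A minimal sketch (a hypothetical helper, not DA's actual code):

```python
import arcpy

def safe_set_progressor(label, total):
    """Try to set a step progressor, but never let a progressor
    failure abort processing (it can misbehave outside a GP tool)."""
    try:
        arcpy.SetProgressor("step", label, 0, total, 1)
    except Exception as ex:
        arcpy.AddWarning("Skipping progressor: {}".format(ex))

def safe_step():
    """Advance the progressor by one step, ignoring any failure."""
    try:
        arcpy.SetProgressorPosition()
    except Exception:
        pass
```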
Attached below is the generated XML:
SourceTarget77.zip
The dataset is located here