qgis / QGIS

QGIS is a free, open source, cross platform (lin/win/mac) geographical information system (GIS)
https://qgis.org
GNU General Public License v2.0

QGIS delete Postgis features after trying to merge them #15668

Closed: qgib closed this issue 5 years ago

qgib commented 12 years ago

Author Name: Regis Haubourg (@haubourg)
Original Redmine Issue: 6422
Affected QGIS version: master
Redmine category: data_provider


Hi all, this is quite a rare bug, but a severe one, since it destroys data with no way to recover without a backup:

Data is in PostGIS 1.5 on PostgreSQL 9.1. There is a primary key defined. See the attached SQL script to reproduce the data, and the PostgreSQL log file.

While editing data, I cut two multipolygons in half with the advanced editing tools, in order to move one part to the other polygon. I use the 'merge' tool to reassign the cut part to the other polygon. It goes well in QGIS. When saving (committing to PostGIS), I get the following error message and lose my objects (rows are deleted in the PostgreSQL table!!):


Could not commit changes to layer bug_lost_feature_qgedit

Errors: ERROR: 2 feature(s) not added. SUCCESS: 1 geometries were changed. SUCCESS: 1 feature(s) deleted.

Provider errors: PostGIS error while adding features: ERROR: duplicate key value violates unique constraint « pk_bv_masse_eau » DETAIL: Key « (eu_cd, version_dce)=(FRFT34, EDL_2013) » already exists.
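The attached reproduction script is not included here, but as a rough illustration only, a table that would trip the constraint named in the error could be sketched as below. The table name, SRID, extra columns and connection string are assumptions; only the key columns, the constraint name and the key values come from the log above.

```python
# Rough psycopg2 sketch (illustration, not the attached script).
# Assumptions: table name bv_masse_eau (guessed from the constraint name),
# SRID 2154, connection string. eu_cd/version_dce/pk_bv_masse_eau and the
# values 'FRFT34'/'EDL_2013' come from the error log.
import psycopg2

conn = psycopg2.connect("dbname=gis user=postgres")  # connection string is an assumption
cur = conn.cursor()
cur.execute("""
    CREATE TABLE bv_masse_eau (
        eu_cd       varchar NOT NULL,
        version_dce varchar NOT NULL,
        CONSTRAINT pk_bv_masse_eau PRIMARY KEY (eu_cd, version_dce)
    );
""")
# PostGIS 1.5-style geometry column (AddGeometryColumn also exists in 2.x)
cur.execute("SELECT AddGeometryColumn('bv_masse_eau', 'geom', 2154, 'MULTIPOLYGON', 2);")
cur.execute("""
    INSERT INTO bv_masse_eau (eu_cd, version_dce, geom)
    VALUES ('FRFT34', 'EDL_2013',
            ST_GeomFromText('MULTIPOLYGON(((0 0, 1 0, 1 1, 0 0)))', 2154));
""")
conn.commit()
cur.close()
conn.close()
```

Re-adding a feature that still carries ('FRFT34', 'EDL_2013') before the original row has been deleted violates pk_bv_masse_eau, which is exactly the provider error reported above.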




Related issue(s): #15099 (relates), #15262 (duplicates), #15485 (relates), #16002 (duplicates), #22243 (relates) Redmine related issue(s): 5475, 5758, 6164, 6872, 14247


qgib commented 12 years ago

Author Name: Giovanni Manghi (@gioman)


qgib commented 12 years ago

Author Name: Jürgen Fischer (@jef-n)


@addFeatures@ and @deleteFeatures@ are independent operations at the provider level. This would need a change in the provider interface. The duplicate key is also a problem at the provider level - we currently don't have a way to expose which attributes are part of the primary key and need to be skipped when copying attributes of existing features to new features. See also #15099.
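To make "independent operations" concrete, a minimal PyQGIS sketch (current QGIS 3 API, illustration only, not the internal C++ code path): addFeatures() and deleteFeatures() are separate provider calls, so a failure of one does not undo the other.

```python
# Minimal PyQGIS sketch, assuming the layer from the report is loaded in the
# current project; the geometry is a placeholder.
from qgis.core import QgsProject, QgsFeature, QgsGeometry

layer = QgsProject.instance().mapLayersByName("bug_lost_feature_qgedit")[0]
provider = layer.dataProvider()

feat = QgsFeature(layer.fields())
feat.setGeometry(QgsGeometry.fromWkt("MULTIPOLYGON(((0 0, 1 0, 1 1, 0 0)))"))

ok_add, _ = provider.addFeatures([feat])        # may fail on a duplicate key...
victim_id = next(layer.getFeatures()).id()
ok_del = provider.deleteFeatures([victim_id])   # ...while the delete still goes through
print(ok_add, ok_del)
```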


qgib commented 12 years ago

Author Name: Regis Haubourg (@haubourg)


OK, but saving does call addFeatures before deleteFeatures, if I read the logs correctly. Any reason not to call deleteFeatures first?

qgib commented 11 years ago

Author Name: Regis Haubourg (@haubourg)


Hi, #16002 reports the same bug. A little bump on my last comment: can we have the provider delete features before trying to add them? That could prevent adding duplicates. We should also wrap this in a begin/rollback so that on error the message is raised and nothing is committed to the data. Régis
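To spell out what is being asked for, a hedged sketch written directly against the database with psycopg2, since that is the simplest way to show the ordering and the rollback; QGIS itself would have to do this through the provider. Table and column names follow the earlier illustration and are assumptions, not the real schema.

```python
# Delete first, then insert, inside a single transaction: if the insert fails,
# the delete is rolled back too and no data is lost.
import psycopg2

conn = psycopg2.connect("dbname=gis user=postgres")  # connection string is an assumption
try:
    with conn:                       # BEGIN ... COMMIT, rolled back on exception
        with conn.cursor() as cur:
            cur.execute(
                "DELETE FROM bv_masse_eau WHERE eu_cd = %s AND version_dce = %s",
                ("FRFT34", "EDL_2013"),
            )
            cur.execute(
                """INSERT INTO bv_masse_eau (eu_cd, version_dce, geom)
                   VALUES (%s, %s, ST_GeomFromText(%s, 2154))""",
                ("FRFT34", "EDL_2013", "MULTIPOLYGON(((0 0, 1 0, 1 1, 0 0)))"),
            )
except psycopg2.Error as err:
    print("commit failed, nothing was applied:", err)  # original rows untouched
finally:
    conn.close()
```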

qgib commented 11 years ago

Author Name: Jürgen Fischer (@jef-n)


Regis Haubourg wrote:

We should also wrap this in a begin/rollback so that on error the message is raised and nothing is committed to the data.

As said, that's not possible, unless we change the provider interface.

qgib commented 11 years ago

Author Name: Regis Haubourg (@haubourg)


Thanks Jürgen, maybe I should ask differently, sorry. This bug is a blocker since it destroys data. Is changing the provider interface a hard or complex thing? Does it break anything? Who is capable of doing it? Would funding some work help? Regards, Régis

qgib commented 11 years ago

Author Name: Jürgen Fischer (@jef-n)


Regis Haubourg wrote:

Maybe I should ask differently, sorry. This bug is a blocker since it destroys data. Is changing the provider interface a hard or complex thing? Does it break anything?

I was only referring to the "wrap this in a begin/rollback" part - just changing the commit order of operations (delete first, insert after) in the vector layer would probably already fix this and would not require a big change.

But adding transactions at a higher level would require that we add transactions to the vector layer provider interface and update all the vector data providers (delimitedtext, gpx, grass, memory, mssql, ogr, osm, postgres, spatialite & sqlanywhere) to support it.

Some providers (or their respective backends) might already support transactions (I suppose all database providers do), some might not (gpx, delimitedtext, memory...), and some might support them optionally (OGR, depending on the respective datasource; GRASS?). Some might already use transactions internally and would need a change to make that available to the outside.

So "transactions" would probably need to be an optional provider capability.

Who is capable of doing it? Would funding some work help?

Um, I suppose most of our devs are, and I guess funding always helps - but the amount of work outlined above is not easy to estimate...

qgib commented 11 years ago

Author Name: Regis Haubourg (@haubourg)


Thanks Jürgen, could we try to estimate the benefits of having transactions implemented for providers? I see at least:

  • recovery from any commit error (network disconnection, constraint error, database failure)

Any others?

Switching the delete/insert order is fine for PK constraints, but not for other constraints (geometry type, ...). Do we have other issues open for other constraints? If not, we should keep it simple, as suggested.
Regis

qgib commented 11 years ago

Author Name: Jürgen Fischer (@jef-n)


Regis Haubourg wrote:

Thanks Jürgen, could we try to estimate the benefits of having transactions implemented for providers? I see at least:

  • recovery from any commit error (network disconnection, constraint error, database failure)

The changes are applied per operation, i.e. add features, delete features, change attributes, change geometries.

For postgres, each part of an operation is already done in a transaction. If one piece of the operation fails, the whole operation is rolled back, reported as unsuccessful and therefore kept in the editing session.

That means if you add multiple features in one session, you either have all features saved to the database or none. In the latter case the editing session will be kept open (as the addFeatures failed) and the changes are still there and pending - so for the cases above you could still commit the changes when the database comes back, or fix the changes that violate constraints.

If you mix operations in one session, it might lead to a limbo state: the successful operations are applied and removed from the editing session, and the unsuccessful parts stay in the editing session.

For postgres, that brings up the question of whether trying to commit twice would help in this ticket's case - sure, even if that worked, it would still be ugly. In the first attempt the add fails but the delete succeeds; in the second attempt the add also succeeds, because the conflicting feature was already deleted in the first run. If so, there at least isn't any data loss - unless you decide to discard the session.

For other providers it might be worse, as the provider might not be able to roll back unsuccessful operations completely: the successful parts of the failing operation would already be applied to the data source, but would still be kept in the editing session as if they weren't, because the operation wasn't completely successful. So if you decide to discard the session later, you would still have a partly changed data source.
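To illustrate the "kept in the editing session" behaviour described above, a hedged PyQGIS sketch (QGIS 3 API, illustration only): when commitChanges() cannot apply every operation, the layer stays in editing mode and the pending changes are not thrown away.

```python
# Delete a feature and re-add a copy that still carries the primary key values,
# then try to commit; the failing add keeps the session open.
from qgis.core import QgsProject, QgsFeature

layer = QgsProject.instance().mapLayersByName("bug_lost_feature_qgedit")[0]

layer.startEditing()
victim = next(layer.getFeatures())
layer.deleteFeature(victim.id())            # queued in the edit buffer

copy = QgsFeature(layer.fields())
copy.setAttributes(victim.attributes())     # copies the primary key too -> duplicate
copy.setGeometry(victim.geometry())
layer.addFeature(copy)                      # also queued

if not layer.commitChanges():
    # The add failed: as described above, the successful delete is applied and
    # removed from the buffer, while the failed add stays pending and the
    # layer remains editable.
    print("\n".join(layer.commitErrors()))
```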

qgib commented 11 years ago

Author Name: Giovanni Manghi (@gioman)


qgib commented 11 years ago

Author Name: Jürgen Fischer (@jef-n)


Fixed in changeset "94156deda42421745df16132a9722fe836975c82".