qwat / qwat-data-model

TEKSI Water module (project QWAT) - PostgreSQL / PostGIS data model
https://www.teksi.ch

Error using pum during restore with psql on postgres 9.4.5, postgis 2.2.0 #214

Closed: ponceta closed this issue 6 years ago

ponceta commented 6 years ago

Trying to update with pum

Everything's going fine except this SRS duplicated-key error on 9.4.5, postgis 2.2.0

Parameters:
    CLEAN       = 0
    LOCALDIR    = /home/regis/OSLANDIA/projets_locaux/QWAT/local_update_dir_test/
    TMPFILEDUMP = /tmp/qwat_dump
    UPGRADE     = 0

    PUM_VERSION = pum version 0.5.6

    Current QWAT model version = 1.3.1

Initializing qwat comparison db

PUM version: pum version 0.5.6

CREATE EXTENSION
CREATE EXTENSION
ERROR:  schema "qwat_sys" already exists

PUM baseline on qwat_comp

Set baseline...OK

Test and upgrade qwat core

Test and upgrade...Dump...OK
Restore...ERROR
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 3971; 0 268291 TABLE DATA spatial_ref_sys postgres
pg_restore: [archiver (db)] COPY failed for table "spatial_ref_sys": ERROR:  duplicate key value violates unique constraint "spatial_ref_sys_pkey"
DETAIL:  Key (srid)=(5013) already exists.
CONTEXT:  COPY spatial_ref_sys, line 1
WARNING: errors ignored on restore: 1

With the old tool I could ignore this error. Should pum be able to ignore some errors caused by PostGIS versions, or should we find a way to fix this? Any PostGIS masters around?
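For reference, the conflicting row can be inspected on both databases before restoring; a minimal diagnostic sketch (SRID 5013 taken from the error above):

```sql
-- Check whether the target database already defines SRID 5013.
-- PostGIS pre-populates spatial_ref_sys, which is why the restored
-- COPY collides with the pre-installed rows.
SELECT srid, auth_name, auth_srid
FROM spatial_ref_sys
WHERE srid = 5013;
```

If the existing row matches the dumped one, the COPY error is harmless and can safely be ignored.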

ponceta commented 6 years ago

This is a known issue with PostGIS 2.2.0: http://postgis.17.x6.nabble.com/restore-problem-td5009283i20.html https://lists.osgeo.org/pipermail/postgis-users/2015-November/040950.html

I had the same error previously: https://github.com/qwat/qwat-data-model/issues/145#issuecomment-268956627

marioba commented 6 years ago

Hi @ponceta, if you cannot install a more recent version of postgis, I've created a small workaround. I added an option (-x) to the pum restore and pum test-and-upgrade commands, which causes the program to continue even if pg_restore returns an error.

Let me know if you had a more elaborate solution in mind, but I wonder whether it's worth it for a PostGIS bug fixed 2 years ago.

It's already available in pum version 0.5.8, installable with pip.

ponceta commented 6 years ago

Hi @marioba,

I updated pum to 0.5.8 and added the -x option to the pum test-and-upgrade command in the upgrade_db.sh script.

This gives :

(screenshot)

qwat_comp is correctly initialized in 1.3.1

qwat_test is correctly restored from qwat_prod but no deltas are applied.

qwat_prod remains untouched.

marioba commented 6 years ago

@ponceta does your qwat_prod db have a baseline set in the table qwat_sys.info?

ponceta commented 6 years ago

@marioba No, there was actually no baseline on the 1.0.0 model, since qwat_sys.version was the reference table.

What content should I put in to get this to work?

marioba commented 6 years ago

@ponceta you can use pum baseline command to create the table and set the baseline into your qwat_prod db before the test-and-upgrade command.

ponceta commented 6 years ago

I added the baseline and it seems to process the upgrade:

pum baseline -p qwat_prod -t qwat_sys.info -d delta/ -b 0.1.0

     Applying delta 1.0.0... OK
     Applying delta 1.0.1... OK
     Applying delta 1.1.0... OK
     Applying delta 1.1.1... OK
     Applying delta 1.2.0... OK
     Applying delta 1.2.1... OK
     Applying delta 1.2.2... OK
     Applying delta 1.2.3... OK
     Applying delta 1.2.4... OK
     Applying delta 1.2.5... OK
     Applying delta 1.2.6... OK
     Applying delta 1.2.7... OK
     Applying delta 1.2.8... 

Message from DropViews128: Dropping views
...
     Applying delta 1.2.8... OK
     Applying delta 1.3.0... OK
     Applying delta 1.3.1... OK
     Applying post-all.py... Message from RecreateViewsAndFunctions: Reloading views and functions

The update seems to have been executed without error.

The qwat_sys.info table looks good on both test and prod.

I'll run a few consistency tests, but this is really nice! Thanks!

marioba commented 6 years ago

Good, I'm glad to hear that. You're welcome.

ponceta commented 6 years ago

I've encountered two small bugs:

It seems views and installation_type have not been recreated when applying deltas to the prod db, despite being correctly recreated on the test db (they were dropped correctly, btw); maybe there's a target issue somewhere.

qwat_comp and qwat_test do not match, at least for valves, which have 2 more constraints in qwat_comp that are probably not yet in the deltas: chk_valve_altitude_obj_ref and chk_valve_altitude_precisionalti

ponceta commented 6 years ago

https://www.diffchecker.com/GhfpiuOJ

See the link for the differences between the updated prod (left) and qwat_comp (right)

sylvainbeo commented 6 years ago

@ponceta Arnaud, what did you get that output from? (https://www.diffchecker.com/GhfpiuOJ)

ponceta commented 6 years ago

@sylvainbeo pgadmin?

left is my updated db and right is the comp db.

sylvainbeo commented 6 years ago

Ok. As you said, there are 2 constraints missing:

```
CONSTRAINT chk_valve_altitude_obj_ref CHECK (fk_object_reference IS NOT NULL OR altitude IS NULL),
CONSTRAINT chk_valve_altitude_precisionalti CHECK (fk_precisionalti IS NOT NULL OR altitude IS NULL),
```

I guess we have to add them in the delta. I'll have a look.
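A delta entry adding those two constraints could look like this sketch (definitions copied from the qwat_comp diff above; verify the table is qwat_od.valve before shipping it):

```sql
ALTER TABLE qwat_od.valve
  ADD CONSTRAINT chk_valve_altitude_obj_ref
  CHECK (fk_object_reference IS NOT NULL OR altitude IS NULL);

ALTER TABLE qwat_od.valve
  ADD CONSTRAINT chk_valve_altitude_precisionalti
  CHECK (fk_precisionalti IS NOT NULL OR altitude IS NULL);
```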

sylvainbeo commented 6 years ago

What I don't understand is how the PUM tests can actually pass if those 2 constraints are missing.

haubourg commented 6 years ago

@sylvainbeo, for some reason it seems that, at least in upgrade_db.sh and probably also in Travis, we ignore many database objects in the comparison test. See the -i option: pum test-and-upgrade -pp qwat_prod -pt qwat_test -pc qwat_comp -t qwat_sys.info -d delta/ -f $TMPFILEDUMP -i columns constraints views sequences indexes triggers functions rules

Let's remove the -i option and see what we get.

sylvainbeo commented 6 years ago

@ponceta From what version did you upgrade your DB?

sylvainbeo commented 6 years ago

@ponceta @haubourg Ok, I cleaned up the Travis and upgrade scripts to enable more comparisons. I just left

-i views rules

(https://travis-ci.org/qwat/qwat-data-model/builds/326819034)

@ponceta I now need to know which version you are trying to upgrade your DB from.

ponceta commented 6 years ago

@sylvainbeo I upgraded my db from 0.1.0, which is the first QWAT release.

ponceta commented 6 years ago

Is it normal that the dump is 5 times bigger after the update?

(screenshot)

I uploaded the two files to the Oslandia share if you want to give it a try:

https://share.oslandia.net/ws-qWAT/technique/PULLY_QWAT

There are many inconsistencies in the number of sequences, functions, trigger functions, ...

haubourg commented 6 years ago

Hi Arnaud, I restored the 1.3.1 db and listed the size of db relations using


SELECT nspname || '.' || relname AS "relation",
    pg_size_pretty(pg_total_relation_size(C.oid)) AS "total_size"
  FROM pg_class C
  LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
  WHERE nspname NOT IN ('pg_catalog', 'information_schema')
    AND C.relkind <> 'i'
    AND nspname !~ '^pg_toast'
  ORDER BY pg_total_relation_size(C.oid) DESC

I get this result, showing mostly TOAST storage for large objects and logged_actions.


pg_toast.pg_toast_33532847;652 MB
qwat_sys.logged_actions;226 MB
pg_toast.pg_toast_33532847_index;7344 kB
qwat_od.pipe;5808 kB
public.spatial_ref_sys;4456 kB
qwat_od.node;4096 kB
qwat_sys.logged_actions_action_tstamp_tx_stm_idx;3544 kB
qwat_sys.logged_actions_action_idx;3544 kB
qwat_sys.logged_actions_relid_idx;3544 kB
qwat_sys.logged_actions_pkey;3544 kB
qwat_od.valve;1272 kB
qwat_od.node_geoidx;1128 kB
qwat_od.node_geoidx_alt2;1128 kB
qwat_od.node_geoidx_alt1;1120 kB
qwat_od.network_element;1120 kB
qwat_od.pipe_geoidx_alt1;688 kB
qwat_od.pipe_geoidx_alt2;672 kB
qwat_od.pipe_geoidx;640 kB
qwat_od.fki_node_fk_district;456 kB
qwat_od.fki_node_fk_pressurezone;456 kB
qwat_od.node_pkey;456 kB
pg_toast.pg_toast_2618;400 kB
qwat_dr.constructionpoint;352 kB
qwat_od.part;336 kB
qwat_od.fki_element_fk_precision;304 kB
qwat_od.fki_element_fk_folder;304 kB
qwat_od.fki_network_element_label_2_visible;304 kB
qwat_od.fki_element_fk_distributor;304 kB
qwat_od.fki_network_element_label_1_visible;304 kB
qwat_od.fki_element_fk_object_reference;304 kB
qwat_od.network_element_pkey;304 kB
qwat_od.fki_element_fk_precisionalti;304 kB
qwat_od.fki_element_fk_status;304 kB
qwat_od.valve_geoidx;272 kB
qwat_od.valve_geoidx_alt2;272 kB
qwat_od.valve_geoidx_alt1;272 kB
qwat_od.fki_pipe_fk_bedding;248 kB
qwat_od.fki_pipe_fk_precision;248 kB
qwat_od.fki_pipe_fk_parent;248 kB
qwat_od.fki_pipe_label_2_visible;248 kB
qwat_od.fki_pipe_fk_node_a;248 kB
qwat_od.fki_pipe_fk_installmethod;248 kB
qwat_od.fki_pipe_fk_distributor;248 kB
qwat_od.fki_pipe_fk_status;248 kB
qwat_od.fki_pipe_schema_force_visible;248 kB
qwat_od.fki_pipe_fk_protection;248 kB
qwat_od.fki_pipe_label_1_visible;248 kB
qwat_od.fki_pipe_fk_watertype;248 kB
qwat_od.fki_pipe_fk_material;248 kB
qwat_od.fki_pipe_fk_node_b;248 kB
qwat_od.fki_pipe_fk_function;248 kB
qwat_od.fki_pipe_fk_district;248 kB
qwat_od.fki_pipe_fk_folder;248 kB
qwat_od.fki_pipe_fk_pressurezone;248 kB
qwat_od.pipe_pkey;248 kB
qwat_dr.constructionpoint_geoidx;232 kB
public.spatial_ref_sys_pkey;200 kB
qwat_od.fki_part_fk_type;192 kB
qwat_od.part_pkey;192 kB
qwat_od.fki_part_fk_pipe;192 kB
qwat_od.fki_valve_fk_precisionalti;120 kB
qwat_od.fki_valve_fk_type;120 kB
qwat_od.fki_valve_label_1_visible;120 kB
qwat_od.fki_valve_schema_force_visible;120 kB
qwat_od.fki_valve_fk_pipe;120 kB
qwat_od.fki_valve_fk_status;120 kB
qwat_od.fki_valve_fk_handle_precisionalti;120 kB
qwat_od.fki_valve_fk_object_reference;120 kB
qwat_od.fki_valve_fk_distributor;120 kB
qwat_od.valve_pkey;120 kB
qwat_od.fki_valve_fk_district;120 kB
qwat_od.fki_valve_fk_folder;120 kB
qwat_od.fki_valve_label_2_visible;120 kB
qwat_od.fki_valve_fk_pressurezone;120 kB
qwat_od.fki_valve_fk_valve_actuation;120 kB
qwat_od.fki_valve_fk_function;120 kB
qwat_od.fki_valve_fk_precision;120 kB
qwat_od.fki_valve_fk_handle_precision;120 kB
pg_toast.pg_toast_2619;120 kB
qwat_dr.constructionpoint_pkey;112 kB
qwat_dr.fki_constructionpoint_fk_object_reference;112 kB
qwat_vl.pipe_material;104 kB
pg_toast.pg_toast_33532739;48 kB
qwat_od.printmap;40 kB
qwat_od.leak;40 kB
qwat_od.pressurezone;32 kB
qwat_od.hydrant;32 kB
qwat_vl.vl_pipe_material_pk;32 kB
qwat_od.fki_hydrant_fk_provider;32 kB
qwat_od.fki_hydrant_fk_model_sup;32 kB
qwat_od.hydrant_pkey;32 kB
qwat_od.fki_hydrant_fk_material;32 kB
qwat_od.fki_hydrant_fk_model_inf;32 kB
qwat_od.fki_hydrant_fk_output;32 kB
pg_toast.pg_toast_1255;24 kB
pg_toast.pg_toast_33532640;24 kB
qwat_od.tank_pkey;16 kB
qwat_od.fki_subscriber_fk_type;16 kB
qwat_od.fki_tank_cistern1type;16 kB
qwat_od.subscriber_pkey;16 kB

Concerning TOAST, I suspect this is the physical storage for large geometries or pictures. I'll investigate that.

Concerning logged actions, was it empty before or already filled? I just want to be sure that the upgrade process doesn't log data.
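To confirm which table owns that large TOAST relation, the catalog can be queried directly (a hedged diagnostic; the OID comes from the listing above):

```sql
-- Find the table whose TOAST relation is pg_toast.pg_toast_33532847
SELECT n.nspname, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.reltoastrelid = 'pg_toast.pg_toast_33532847'::regclass;
```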

haubourg commented 6 years ago

This query confirms we have an issue with logged_actions:

SELECT *, pg_size_pretty(total_bytes) AS total
    , pg_size_pretty(index_bytes) AS INDEX
    , pg_size_pretty(toast_bytes) AS toast
    , pg_size_pretty(table_bytes) AS TABLE
  FROM (
  SELECT *, total_bytes-index_bytes-COALESCE(toast_bytes,0) AS table_bytes FROM (
      SELECT c.oid,nspname AS table_schema, relname AS TABLE_NAME
              , c.reltuples AS row_estimate
              , pg_total_relation_size(c.oid) AS total_bytes
              , pg_indexes_size(c.oid) AS index_bytes
              , pg_total_relation_size(reltoastrelid) AS toast_bytes
          FROM pg_class c
          LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
          WHERE relkind = 'r'
  ) a
) a
order by "table_bytes" desc;

First line shows this:

(screenshot)

So TOAST size is entirely caused by the raw hstore of the logs.

haubourg commented 6 years ago

Looking at the last lines of the audit logs, clearly some delta files are applying updates without deactivating the audit log triggers, which causes them to grow when they should not.

As a solution, we will need to list all logging triggers and explicitly deactivate them in updates, and publish the snippet in the doc and the sources for ETL loading tasks.

As a workaround, deleting all logged actions by date will do the job for this issue.
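The workaround could be as simple as the following sketch (the cutoff date is illustrative; the column name matches the logged_actions indexes listed above):

```sql
-- Remove audit rows generated by the migration run itself
DELETE FROM qwat_sys.logged_actions
WHERE action_tstamp_tx >= '2018-01-01';  -- date of the upgrade run
```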

haubourg commented 6 years ago

Concerning the sequence, function and trigger inconsistencies, can you be more specific?

ponceta commented 6 years ago

Regarding trigger functions, these should have been deleted (I suppose):

ft_installation_chamber_delete()
ft_installation_chamber_insert()
ft_installation_chamber_update()
ft_installation_pressurecontrol_delete()
ft_installation_pressurecontrol_insert()
ft_installation_pressurecontrol_update()
ft_installation_pump_delete()
ft_installation_pump_insert()
ft_installation_pump_update()
ft_installation_source_delete()
ft_installation_source_insert()
ft_installation_source_update()
ft_installation_tank_delete()
ft_installation_tank_insert()
ft_installation_tank_update()
ft_installation_treatment_delete()
ft_installation_treatment_insert()
ft_installation_treatment_update()

Regarding sequences (which should have been deleted): installation_id_seq

Regarding functions (which should have been deleted?): fn_valve_set_orientation(integer)
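If these leftovers are confirmed, a cleanup delta could drop them explicitly. A sketch (the qwat_od schema is an assumption here; adjust to where the objects actually live):

```sql
DROP FUNCTION IF EXISTS qwat_od.ft_installation_chamber_delete();
DROP FUNCTION IF EXISTS qwat_od.ft_installation_chamber_insert();
DROP FUNCTION IF EXISTS qwat_od.ft_installation_chamber_update();
-- ... same pattern for pressurecontrol, pump, source, tank, treatment ...
DROP SEQUENCE IF EXISTS qwat_od.installation_id_seq;
DROP FUNCTION IF EXISTS qwat_od.fn_valve_set_orientation(integer);
```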

ponceta commented 6 years ago

@haubourg to shrink my db, should I then truncate logged_actions?

haubourg commented 6 years ago

Truncate will lose even current data you might want to keep. I would advise the delete by date, then a backup of that table, and finally a truncate, so that you have logs compatible with the reworked audit system. @sylvainbeo can you check the triggers that should be deleted and diagnose that?
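The backup-then-truncate part of that sequence could be sketched as follows (the backup table name is illustrative):

```sql
-- Keep a copy of the audit rows worth preserving...
CREATE TABLE qwat_sys.logged_actions_backup AS
SELECT * FROM qwat_sys.logged_actions;

-- ...then start fresh for the reworked audit system
TRUNCATE qwat_sys.logged_actions;
```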

sylvainbeo commented 6 years ago

@ponceta @haubourg Sorry, I was away today. Yes, I will have a look at that.

There's something I don't understand in the process. If you run an upgrade from version 0.1.0, Travis does the same and there are no errors. So I guess the deltas must go wrong with your specific data (?)

sylvainbeo commented 6 years ago

@ponceta Still debugging the process, I encountered this error:


Applying delta 1.1.0... cannot drop view qwat_od.vw_element_valve because other objects depend on it
DETAIL:  view qwat_od.vw_export_valve_sigip depends on view qwat_od.vw_element_valve
HINT:  Use DROP ... CASCADE to drop the dependent objects too.

It's very weird that you did not get that error.
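Following the HINT in the error, a hedged fix would be to cascade the drop, then recreate the dependent view afterwards if it is still needed:

```sql
-- Also drops qwat_od.vw_export_valve_sigip, which depends on this view;
-- recreate it after the delta if it is still required.
DROP VIEW qwat_od.vw_element_valve CASCADE;
```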

ponceta commented 6 years ago

I removed all SIGIP views manually, since we currently have no guidelines on data model customizations.

ponceta commented 6 years ago

@haubourg there was no logging in 0.1.0 so truncate was not a problem.

sylvainbeo commented 6 years ago

@ponceta Just a question: why were you on 0.1.0? I thought you had at least a 1.0 version (?)

ponceta commented 6 years ago

The 1st version was tagged 0.1.0 in the old sys.version table. If I remember correctly, it was there to test the update mechanism, which was released with delta 1.0.0.

But we were indeed on the first version (which was not called 1.0.0).

ponceta commented 6 years ago
Could not commit the changes for layer "ouvrage"

Errors: ERROR: 1 attribute value change was not applied.
Data provider error:
PostGIS error while changing attributes: ERROR: record "new" has no field "schema_force_visible"
CONTEXT: SQL statement "UPDATE qwat_od.vw_node_element SET
fk_district = NEW.fk_district,
fk_pressurezone = NEW.fk_pressurezone,
fk_printmap = NEW.fk_printmap,
_printmaps = NEW._printmaps,
_geometry_alt1_used = NEW._geometry_alt1_used,
_geometry_alt2_used = NEW._geometry_alt2_used,
_pipe_node_type = NEW._pipe_node_type,
_pipe_orientation = NEW._pipe_orientation,
_pipe_schema_visible = NEW._pipe_schema_visible,
geometry = NEW.geometry,
geometry_alt1 = NEW.geometry_alt1,
geometry_alt2 = NEW.geometry_alt2,
update_geometry_alt1 = NEW.update_geometry_alt1,
update_geometry_alt2 = NEW.update_geometry_alt2,
identification = NEW.identification,
fk_distributor = NEW.fk_distributor,
fk_status = NEW.fk_status,
fk_folder = NEW.fk_folder,
fk_locationtype = NEW.fk_locationtype,
fk_precision = NEW.fk_precision,
fk_precisionalti = NEW.fk_precisionalti,
fk_object_reference = NEW.fk_object_reference,
altitude = NEW.altitude,
year = NEW.year,
year_end = NEW.year_end,
orientation = NEW.orientation,
remark = NEW.remark,
schema_force_visible = NEW.schema_force_visible,
label_1_visible = NEW.label_1_visible,
label_1_x = NEW.label_1_x,
label_1_y = NEW.label_1_y,
label_1_rotation = NEW.label_1_rotation,
label_1_text = NEW.label_1_text,
label_2_visible = NEW.label_2_visible,
label_2_x = NEW.label_2_x,
label_2_y = NEW.label_2_y,
label_2_rotation = NEW.label_2_rotation,
label_2_text = NEW.label_2_text
WHERE id = OLD.id"
PL/pgSQL function qwat_od.ft_element_installation_update(), line 3 at SQL statement

Error in 1.3.1: has the qwat_od.ft_element_installation_update() trigger function not been updated correctly?

ponceta commented 6 years ago

https://www.diffchecker.com/Dii1xmUs

It seems that this has not been updated at all.

ponceta commented 6 years ago

(screenshot)

sylvainbeo commented 6 years ago

@ponceta I'm working on it.

sylvainbeo commented 6 years ago

@ponceta in your upgrade_db.sh, what options did you have on your pum command (pum test-and-upgrade ...)?

-i columns constraints views sequences indexes triggers functions rules

@marioba does the -i option affect only the difference check, or also what is applied to the DB?

marioba commented 6 years ago

@sylvainbeo the -i option only affects the check for differences.

sylvainbeo commented 6 years ago

@ponceta Ok, taking your initial dump, I've got a pretty good process. The only differences are on the "damage" table. I'm going to check that.

sylvainbeo commented 6 years ago

@ponceta I wonder how you could get through upgrade_db.sh with that damage table in your DB. Could you show me the pum command in your upgrade_db.sh?

ponceta commented 6 years ago

@sylvainbeo I went through the same steps:

baseline / SIGIP views / damage table

I added tables to the -i option and got through with my damage table (which is, in short, a copy of qwat_od.leak used for historization in Pully):

pum test-and-upgrade -x -pp qwat_prod -pt qwat_test -pc qwat_comp -t qwat_sys.info -d delta/ -f $TMPFILEDUMP -i tables columns constraints views sequences indexes triggers functions rules

sylvainbeo commented 6 years ago

@ponceta But did you add a local delta file for the damage table? The process cannot succeed without the definition of that table somewhere.

ponceta commented 6 years ago

I've followed @3nids's lead for the SIGE extension and created a SIGIP extension, with view insertion and view dropping:

https://github.com/VilleDePully/qwat-data-model/pull/9/files

I'll add other customizations there.

haubourg commented 6 years ago

I'm only now discovering the concept of QWAT extensions. Did we already discuss that at the PSC before I jumped in? From what we discussed in late 2017, you mean that the Pully customizations are stored as a QWAT extension?
We never specified or tested PUM with that extension concept. @3nids can you point to documentation about extension guidelines?

ponceta commented 6 years ago

@haubourg The SIRE extension was initially only an SQL script stored in qwat-data-model/extensions.

Until today, Pully's customizations were only stored locally as SQL scripts (and it is almost the same for Lausanne).

To my knowledge, extensions were not discussed much before late 2017. That's why we addressed the customization guidelines at the last PSC. Now that PUM comes in, we have to define some structuring rules for these extensions/customizations, otherwise we will have conflicts when updating with PUM, as is the case right now.

This is valid for both QWAT and QGEP and should be addressed in a separate issue.

haubourg commented 6 years ago

Customization guidelines should be discussed here: https://github.com/qwat/qwat-data-model/issues/151. I can't find or remember any mention of extensions there. Did you and Denis define that offline? Where can we find the doc about that? From what we discussed in December, it was agreed that customization files should not be inside the data-model repo or any git fork, but in separate files. That's why I'm surprised to find core extensions inside the QWAT data model core.

ponceta commented 6 years ago

Customization != Extension

An extension is a piece of code shared by some, but not all, QWAT users.

Actually, customizations ARE in separate SQL scripts, not integrated in git. But this makes PUM fail when checking a customized database that has to be upgraded, as commented by @sylvainbeo in https://github.com/qwat/qwat-data-model/issues/214#issuecomment-364855468

Having customized views blocks me from refreshing the views as discussed here

That's why I just added some code logic, but this will change with the guidelines and rules that we will discuss and define in https://github.com/qwat/qwat-data-model/issues/151

ponceta commented 6 years ago

Regarding trigger functions, as I commented in https://github.com/qwat/qwat-data-model/issues/214#issuecomment-356912821:

I have 68 non-updated trigger functions, and according to qwat_comp I should only have 50 of them.

sylvainbeo commented 6 years ago

@haubourg @ponceta Ok. We just spoke with Arnaud, and the last thing to correct here is to clean up the old trigger functions ft_installation_xxxx... I'll check that (use a new delta? or do the drop in the view scripts?)

sylvainbeo commented 6 years ago

A bit of explanation: in earlier versions of QWAT there were trigger functions which were generated automatically. It is still like that. But the model has changed, and now some of those trigger functions are no longer needed... and they are not deleted by the upgrade process.
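To enumerate the obsolete generated functions before writing the cleanup delta, the system catalogs can be queried (a diagnostic sketch; the naming pattern comes from the list earlier in this thread):

```sql
-- List all functions matching the old generated trigger-function naming scheme
SELECT n.nspname, p.proname
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE p.proname LIKE 'ft\_installation\_%';
```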