Indicia-Team / warehouse

GNU General Public License v3.0

Training samples with occurrence photos fail to upload #493

Closed. kazlauskis closed this issue 1 year ago.

kazlauskis commented 1 year ago

Uploading a training sample record like this fails. Uploading non-training records works OK.

POST https://warehouse1.indicia.org.uk/index.php/services/rest/samples

{
    "values": {
        "date": "21/9/2023",
        "entered_sref_system": 4326,
        "entered_sref": "54.9686000, -1.6101500",
        "survey_id": 374,
        "training": "t"
    },
    "occurrences": [
        {
            "values": {
                "taxa_taxon_list_id": 179833
            },
            "media": [
                {
                    "values": {
                        "queued": "16/95/29/650c2121882466.87064374.jpeg"
                    }
                }
            ]
        }
    ]
}

Error:

{
    "code": 400,
    "status": "Bad Request",
    "message": {
        "occurrence_medium:general": "There was an SQL error: ERROR:  insert or update on table \"occurrence_media\" violates foreign key constraint \"fk_occurrence_media_occurrences\"\nDETAIL:  Key (occurrence_id)=(353702242) is not present in table \"occurrences\". - INSERT INTO \"occurrence_media\" (\"created_on\", \"created_by_id\", \"updated_on\", \"updated_by_id\", \"occurrence_id\", \"licence_id\", \"media_type_id\", \"path\") VALUES ('20230921 11:57:32', 294899, '20230921 11:57:32', 294899, 353702242, NULL, 3744, '16/95/29/650c2193d18729.90853143.jpeg')",
        "sample:general": "An error occurred: There was an SQL error: ERROR:  current transaction is aborted, commands ignored until end of transaction block - INSERT INTO cache_occurrences_functional(\n            id, sample_id, website_id, survey_id, input_form, location_id,\n            location_name, public_geom,\n            date_start, date_end, date_type, created_on, updated_on, verified_on,\n            created_by_id, group_id, taxa_taxon_list_id, preferred_taxa_taxon_list_id,\n            taxon_meaning_id, taxa_taxon_list_external_key, family_taxa_taxon_list_id,\n            taxon_group_id, taxon_rank_sort_order, record_status, record_substatus,\n            certainty, query, sensitive, private, release_status,\n            marine_flag, freshwater_flag, terrestrial_flag, non_native_flag, data_cleaner_result,\n            training, zero_abundance, licence_id, import_guid, confidential, external_key,\n            taxon_path, blocked_sharing_tasks, parent_sample_id, verification_checks_enabled,\n            media_count, identification_difficulty)\nSELECT distinct on (o.id) o.id, o.sample_id, o.website_id, s.survey_id, COALESCE(sp.input_form, s.input_form), s.location_id,\n    case when o.confidential=true or o.sensitivity_precision is not null or s.privacy_precision is not null\n        then null else coalesce(l.name, s.location_name, lp.name, sp.location_name) end,\n    reduce_precision(coalesce(s.geom, l.centroid_geom), o.confidential, greatest(o.sensitivity_precision, s.privacy_precision)) as public_geom,\n    s.date_start, s.date_end, s.date_type, o.created_on, o.updated_on, o.verified_on,\n    o.created_by_id, coalesce(s.group_id, sp.group_id), o.taxa_taxon_list_id, cttl.preferred_taxa_taxon_list_id,\n    cttl.taxon_meaning_id, cttl.external_key, cttl.family_taxa_taxon_list_id,\n    cttl.taxon_group_id, cttl.taxon_rank_sort_order, o.record_status, o.record_substatus,\n    case when certainty.sort_order is null then null\n        when certainty.sort_order 
<100 then 'C'\n        when certainty.sort_order <200 then 'L'\n        else 'U'\n    end,\n    null,\n    o.sensitivity_precision is not null, s.privacy_precision is not null, o.release_status,\n    cttl.marine_flag, cttl.freshwater_flag, cttl.terrestrial_flag, cttl.non_native_flag, null,\n    o.training, o.zero_abundance, s.licence_id, o.import_guid, o.confidential, o.external_key,\n    ctp.path,\n    CASE WHEN u.allow_share_for_reporting\n      AND u.allow_share_for_peer_review AND u.allow_share_for_verification\n      AND u.allow_share_for_data_flow AND u.allow_share_for_moderation\n      AND u.allow_share_for_editing\n    THEN null\n    ELSE\n      ARRAY_REMOVE(ARRAY[\n        CASE WHEN u.allow_share_for_reporting=false THEN 'R' ELSE NULL END,\n        CASE WHEN u.allow_share_for_peer_review=false THEN 'P' ELSE NULL END,\n        CASE WHEN u.allow_share_for_verification=false THEN 'V' ELSE NULL END,\n        CASE WHEN u.allow_share_for_data_flow=false THEN 'D' ELSE NULL END,\n        CASE WHEN u.allow_share_for_moderation=false THEN 'M' ELSE NULL END,\n        CASE WHEN u.allow_share_for_editing=false THEN 'E' ELSE NULL END\n      ], NULL)\n    END,\n    s.parent_id,\n    w.verification_checks_enabled,\n    (SELECT COUNT(om.*) FROM occurrence_media om WHERE om.occurrence_id=o.id AND om.deleted=false),\n    (SELECT cts.identification_difficulty FROM cache_taxon_searchterms cts where cts.taxa_taxon_list_id=o.taxa_taxon_list_id AND cts.simplified=false)\nFROM occurrences o\n\nLEFT JOIN cache_occurrences_functional co on co.id=o.id\nJOIN websites w ON w.id=o.website_id AND w.deleted=false\nJOIN samples s ON s.id=o.sample_id AND s.deleted=false\nLEFT JOIN samples sp ON sp.id=s.parent_id AND  sp.deleted=false\nLEFT JOIN locations l ON l.id=s.location_id AND l.deleted=false\nLEFT JOIN locations lp ON lp.id=sp.location_id AND lp.deleted=false\nJOIN users u ON u.id=o.created_by_id -- deleted users records still included.\nJOIN cache_taxa_taxon_lists cttl ON 
cttl.id=o.taxa_taxon_list_id\nLEFT JOIN cache_taxon_paths ctp ON ctp.external_key=cttl.external_key AND ctp.taxon_list_id=15\nLEFT JOIN (occurrence_attribute_values oav\n    JOIN termlists_terms certainty ON certainty.id=oav.int_value\n    JOIN occurrence_attributes oa ON oa.id=oav.occurrence_attribute_id and oa.deleted=false and oa.system_function='certainty'\n  ) ON oav.occurrence_id=o.id AND oav.deleted=false\nWHERE o.deleted=false\nAND co.id IS NULL\n and o.id in (353702242)"
    }
}
JimBacon commented 1 year ago

I've raised a similar issue at https://github.com/BiologicalRecordsCentre/ABLE/issues/626 and am investigating. I hadn't realised it was linked to training, thanks. There was a warehouse update released yesterday which may be related.

JimBacon commented 1 year ago

@kazlauskis I've put a patch in place which fixes the issue I was having. Is it looking okay to you now?

JimBacon commented 1 year ago

@kazlauskis The Butterfly Count app says "Sorry, Failed to fetch" when I try to submit a 15 minute count. I'm worried in case I have broken something by making my patch.

JimBacon commented 1 year ago

Note that @johnvanbreda has just made a change affecting JSONP responses to spatial services, which may also be relevant.

kazlauskis commented 1 year ago

@JimBacon @BirenRathod our apps can't upload records anymore. This affects all our apps, not just test records. I am away at the moment so I can't see exactly what's happening. Can you remove whatever changes were made to the backend today, including the new warehouse headers?

Linked to https://github.com/NERC-CEH/fit-count-app/issues/181

JimBacon commented 1 year ago

I will do it now.

johnvanbreda commented 1 year ago

@JimBacon the only change to spatial services was to add a content-type header set to application/javascript for JSONP requests to the spatial web services. I don't think they are used by the app at all. The changes were necessary to get round the changes to the headers on warehouse1.

BirenRathod commented 1 year ago

@kazlauskis Is it possible to send me the error, so I can revert just that particular change instead of reverting all the headers?

BirenRathod commented 1 year ago

> @JimBacon @BirenRathod our apps can't upload records anymore. This affects all our apps, not just test records. I am away at the moment so I can't see exactly what's happening. Can you remove whatever changes were made to the backend today, including the new warehouse headers?
>
> Linked to NERC-CEH/fit-count-app#181

Also, when I asked you before, you mentioned the app works fine. Did you not check record uploading?

kazlauskis commented 1 year ago

I am away until later in the evening. Try various combinations and use the iRecord app for a test.

JimBacon commented 1 year ago

I have reverted the two backend code changes by running git checkout v8.23.0 and commenting out the one line of code I added.

I still see a problem. @BirenRathod, have you changed any server settings on warehouse1?

BirenRathod commented 1 year ago

@JimBacon Well, I added the headers this morning after testing with @kazlauskis. After I got confirmation from him, I left the settings as they were.

BirenRathod commented 1 year ago

@kazlauskis I have now reverted it. Check if it works okay.

JimBacon commented 1 year ago

Yes, it works now.

BirenRathod commented 1 year ago

@kazlauskis If the apps don't work with these headers, it will be a problem when we move the warehouse next Wednesday. They are mandatory on the new server, so we need to find a solution, and I can't do that without looking into the error.
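
The thread doesn't say which headers were involved, but hardening headers of the kind a new server typically mandates look like the following. This is an illustrative Apache fragment only; the actual warehouse settings may differ.

```apache
# Common security headers (illustrative, not the warehouse's real config)
Header always set X-Content-Type-Options "nosniff"
Header always set X-Frame-Options "SAMEORIGIN"
Header always set Strict-Transport-Security "max-age=31536000"
# Cross-origin apps (such as the mobile apps here) also need CORS headers,
# otherwise their fetch() calls to the warehouse fail:
Header always set Access-Control-Allow-Origin "*"
```

If the apps broke when headers were added, a likely culprit is a CORS or content-type restriction that the apps' requests did not satisfy, which is the sort of thing that can only be confirmed from the actual error.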

JimBacon commented 1 year ago

I have committed my fix to the repo and tagged it v8.23.2. Git automatically closed this issue but I have not deployed it to the live server. Travis has just reported that my build is broken. Sigh.

JimBacon commented 1 year ago

The problem with the tests on Travis was resolved and the fix has been deployed to live. This should resolve the original issue. Some discussion about this overflowed into https://github.com/BiologicalRecordsCentre/iRecord/issues/1529

The problem with the apps all failing was unrelated and due to server configuration changes. These changes were reverted.