Looks like some part of the system is scheduling analyses for packages without providing a correct version. There are ~100k such failed analyses.
ProgrammingError: (psycopg2.ProgrammingError) can't adapt type 'dict' [SQL: 'SELECT versions.id AS versions_id, versions.package_id AS versions_package_id, versions.identifier AS versions_identifier, versions.synced2graph AS versions_synced2graph, ecosystems_1.id AS ecosystems_1_id, ecosystems_1.name AS ecosystems_1_name, ecosystems_1.url AS ecosystems_1_url, ecosystems_1.fetch_url AS ecosystems_1_fetch_url, ecosystems_1._backend AS ecosystems_1__backend, packages_1.id AS packages_1_id, packages_1.ecosystem_id AS packages_1_ecosystem_id, packages_1.name AS packages_1_name \nFROM versions LEFT OUTER JOIN packages AS packages_1 ON packages_1.id = versions.package_id LEFT OUTER JOIN ecosystems AS ecosystems_1 ON ecosystems_1.id = packages_1.ecosystem_id \nWHERE versions.package_id = %(package_id_1)s AND versions.identifier = %(identifier_1)s'] [parameters: {'identifier_1': {'latest': '0.1.25'}, 'package_id_1': 573104}] (Background on this error at: http://sqlalche.me/e/f405)
File "celery/app/trace.py", line 375, in trace_task
R = retval = fun(*args, **kwargs)
File "celery/app/trace.py", line 632, in __protected_call__
return self.run(*args, **kwargs)
File "selinon/task_envelope.py", line 169, in run
raise self.retry(max_retries=0, exc=exc)
File "celery/app/task.py", line 668, in retry
raise_with_context(exc)
File "selinon/task_envelope.py", line 114, in run
result = task.run(node_args)
File "f8a_worker/base.py", line 106, in run
raise exc
File "f8a_worker/base.py", line 81, in run
result = self.execute(node_args)
File "f8a_worker/workers/init_analysis_flow.py", line 41, in execute
v = Version.get_or_create(db, package_id=p.id, identifier=arguments['version'])
File "f8a_worker/models.py", line 71, in get_or_create
return cls._by_attrs(session, **attrs)
File "f8a_worker/models.py", line 51, in _by_attrs
return session.query(cls).filter_by(**attrs).one()
File "sqlalchemy/orm/query.py", line 2884, in one
ret = self.one_or_none()
File "sqlalchemy/orm/query.py", line 2854, in one_or_none
ret = list(self)
File "sqlalchemy/orm/query.py", line 2925, in __iter__
return self._execute_and_instances(context)
File "sqlalchemy/orm/query.py", line 2948, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "sqlalchemy/engine/base.py", line 948, in execute
return meth(self, multiparams, params)
File "sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
compiled_sql, distilled_params
File "sqlalchemy/engine/base.py", line 1200, in _execute_context
context)
File "sqlalchemy/engine/base.py", line 1413, in _handle_dbapi_exception
exc_info
File "sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "sqlalchemy/util/compat.py", line 186, in reraise
raise value.with_traceback(tb)
File "sqlalchemy/engine/base.py", line 1193, in _execute_context
context)
File "sqlalchemy/engine/default.py", line 507, in do_execute
cursor.execute(statement, parameters)
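The parameters in the error show the root cause: `identifier_1` is a dict (`{'latest': '0.1.25'}`) rather than a version string, so psycopg2 cannot adapt it when binding the query. A defensive check in `InitAnalysisFlow.execute` before calling `Version.get_or_create` would turn this into a clear, early failure instead of a database error. The sketch below is a hypothetical helper, not the existing worker code; the function name and unwrapping behavior are assumptions:

```python
# Hypothetical validation helper for InitAnalysisFlow (not existing f8a_worker
# code). It rejects the malformed input seen in the traceback, where the
# scheduler passed {'latest': '0.1.25'} instead of a plain version string.

def validate_version(arguments):
    """Return the 'version' value as a string, or raise ValueError."""
    version = arguments.get('version')
    if isinstance(version, dict):
        # Some caller appears to wrap the version as {'latest': '<ver>'};
        # fail loudly so the bad scheduling path can be traced and fixed.
        raise ValueError(
            "expected 'version' to be a string, got dict: %r" % (version,))
    if not isinstance(version, str) or not version:
        raise ValueError("missing or invalid 'version': %r" % (version,))
    return version
```

With such a check, the failing task would raise `ValueError` at the top of the flow rather than a `ProgrammingError` deep inside SQLAlchemy, making the ~100k failures much easier to attribute to the misbehaving scheduler component.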
Args (note the version key):
Sentry: https://errortracking.prod-preview.openshift.io/openshift_io/fabric8-analytics-production/issues/9388/