In the maintenance scripts we rely on the owner of the Plone site being a system user. It seems that we have many deployments where the Plone site owner is admin, a user that does not exist, so the maintenance scripts get executed as Anonymous, with obvious consequences: they fail because not all objects can be accessed.
As to why some of our sites are owned by the admin user, here is what I found out:
Before creating the Plone site, the standard Plone admin user (username admin, password admin) is available as a system user, and you can log in with that user to set up the Plone site. I think acl_users then gets replaced during installation, and that user is not available anymore.
It could be that some older deployments are pre-ftw.zopemaster and that things worked differently at that time?
The correct way of installing a new deployment would be to zauth on the Plone root before installing the Plone site.
The proposed fix comes from how this is done for bin/instance run scripts (https://github.com/plone/plone.recipe.zope2instance/blob/master/src/plone/recipe/zope2instance/ctl.py#L691-L694).

For https://4teamwork.atlassian.net/browse/GEVER-192
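For reference, the pattern used in the ctl.py code linked above looks roughly like this. This is a sketch only, assuming the standard Zope security API; it requires a running Zope environment and is not the final implementation for our maintenance scripts:

```python
# Sketch (assumes a Zope environment): how bin/instance run scripts
# acquire privileges in plone.recipe.zope2instance.
from AccessControl.SecurityManagement import newSecurityManager
from AccessControl.SpecialUsers import system as system_user

# Instead of depending on the Plone site owner existing in acl_users,
# install a security manager for the built-in Zope system user, so the
# script no longer runs as Anonymous.
newSecurityManager(None, system_user)
```

Doing the same at the start of the maintenance scripts would make them independent of who happens to own the Plone site.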