
Evaluating and measuring engagement processes #6

Closed: kentdaitken closed this issue 4 years ago

kentdaitken commented 7 years ago

If this exists elsewhere in the GitHub ecosystem, let me know and I'll just comment there. Section 3 is long and academic, but I figured it'd be good for reference; feel free to skip it and weigh in on section 4.

Contents

  1. Problem
  2. Assumptions and premises
  3. Research
  4. Resources
  5. Possibilities

1. Problem

There's lots of guidance for how to evaluate public participation (P2) exercises (including much developed by GC departments), but there's nothing standardized. For good reason, as each P2 process has different goals, stakeholders, and definitions of success, so guidance tends towards this format: "Identify your goals in advance (here are some suggestions), and have a data gathering plan to ascertain whether you've met them (here are some suggestions, e.g., surveys, interviews with a sub-section of the participants, etc.)."

The downside is that this makes it very difficult to compare different exercises, methods, or different organizations' capacity for P2, and therefore difficult to systematize learning and improvement.

Any evaluation approach for the GC will have to map to the Principles (see: https://github.com/canada-ca/welcome/wiki/Draft-Guiding-Principles-for-Consultations-and-Public-Engagement), which are not yet set in stone, but I think we can get some discussion going on how we might bridge that gap between defining success in context for each P2 process but assessing against common markers for comparability and quality.

2. Assumptions and premises (i.e., @kentdaitken's, writing this)

3. Research

References can be looked up here.

Criteria for success (Rowe and Frewer, 2000): http://www.en.ipea.gov.br/participacao/images/pdfs/participacao/2000%20public%20participation%20methods.pdf

Acceptance criteria:

Process criteria:

Goals of P2, Shepard (1997):

Literature review (copying from my academic work, so apologies for the, uh, writing):

This last point about incorporating local citizens’ understanding of the area is a recurring element in P2 literature. However, agreement with this principle is not universal; many have concluded or at least considered that complex policies such as environmental and hazard mitigation issues should be the domain of those with specialized expertise. P2 has been shown to push the U.S. Environmental Protection Agency towards preventative decisions (i.e., to make decisions that forego economic benefits disproportionate to assessed risks) (Daley 2007), and citizens tend to assess risks as higher than experts do (Green 1997, Renn 1992). Godschalk, Burby, and Brody (2003, 735) studied the difficulty of explaining subjects with “high complexity and risk” to public audiences. Green (1997, 435) highlights that “the addition of ... public participation rights risks overregulation when public demand for control is high without reducing the possibility of underregulation when public interest is low.”

On the other hand, citing Innes (1990, 1998), Lindblom and Cohen (1979), and Schon (1983), Burby (2003, 34) concludes that “citizens possess ‘ordinary knowledge’” that helps planners understand local conditions. Orosz (2002) and Jonsson (2005) come to a similar conclusion, as does Shepard (1997), citing Susskind and Cruikshank (1987) and Portney (1991) while grounded in the more realist view that the alternative can be appeals, lawsuits, and accordingly, project delays.

It is important to note that these considerations of the benefits of P2 are typically applied across all P2, regardless of political context, jurisdiction, or the P2 format selected. With the exception of Colbourne (2009) (http://www.sciencewise-erc.org.uk/cms/assets/Uploads/Public-Futures15-05-14.pdf), there seems to be comparatively little effort applied to developing a framework for different kinds of public policy issues and whether various P2 activities apply asymmetrically. At most, P2 activities are categorized as (1) technical and/or complex issues and (2) all others. This is particularly true in that the preponderance of P2 case studies focuses on local examples rather than regional or federal ones.

“Type A situations are characterised by low controversy and/or few alternative options due to constraints of time, procedure and resources, or by the existence of a crisis (and need to act immediately).

Type B situations are characterised by a greater number of options, increased uncertainty around the ‘right’ decision and/or the need to make trade-offs and compromises.

Type C situations are characterised by the need to make a decision that will affect many stakeholders (individuals, communities and/or organisations) in a situation with much complexity or uncertainty, a range of (often entrenched) views on the ‘right’ decision and a strong likelihood of conflict and resistance.”

Authors such as Harvey (2009) and Martineau-Delisle and Nadeau (2010)(http://www.cfs.nrcan.gc.ca/pubwarehouse/pdfs/31978.pdf) point to a lack of emphasis in the literature on the participants’ goals and motivations, and what constitutes a success in the eyes of participants. Many authors refer to participants’ goals in realist terms, insofar as they confer legitimacy or efficacy on the goals of those hosting P2 processes; in some cases, the authors also cite Downs’ (1957) rational choice theory about citizen behaviour, to explain why citizens may opt out of participation opportunities. Likewise, Cosmo Howard (Lindquist 2010, p. 15) critiques “citizen-centric” models of evaluation insofar as they can “focus on services that governments had chosen to deliver, the manner in which they had chosen to deliver them, and using criteria and a measurement tool that, while informed by citizen surveys, might not reflect the questions and issues citizens might raise in the context of particular services.”

Rowe and Frewer have covered the evaluation element of P2 research and were setting the research agenda in 2004 for what they believed was an immature field (Rowe and Frewer, 2004). Practitioners have since developed general guides to evaluating the success of P2 (Involve 2011; World Bank 2016), but the difficulty in establishing what constitutes “good policy” given the range of citizens’ views and preferences, and in understanding the extent to which P2 influences policy decisions, makes standardization and comparability elusive.

Lastly, it is also notable that few authors characterize success in terms of economic value or a cost/benefit analysis, but rather, in terms of decontextualized success towards achieving many of the above goals (e.g., public legitimacy, the avoidance of legal challenges, or the incorporation of stakeholder ideas and suggestions) - meaning that defining and comparing “success” in a meta-analysis is difficult.

A couple of bonus snippets about participatory budgeting research as it pertains to the P2 goal of "better public policy":

Torgler and Schneider (2009) found that lower tax delinquency rates occur in jurisdictions where citizens have trust in government institutions (tax delinquency being the non-payment of taxes, meaning governments have to spend money enforcing tax law rather than enjoying compliance; this is in contrast to taxpayers enacting strategies for legal tax avoidance). Moreover, direct participation, particularly in budgetary decisions, can result in significant increases in tax revenues (Schneider and Baquero 2006).

More strikingly, Schneider and Goldfrank (2002) found that the introduction of participatory budgeting led to more accurate forecasting of expenditures against budgets, as well as higher project completion rates. Other authors found evidence that participatory budgeting increases investment levels or the quality of financial management practices (Zamboni 2007).

4. Resources

Some of these sources are getting old. There are a couple of recent books on the topic that cost $200+ as ebooks - I'll try to get access through one of the universities and distill them.

Evaluating Public Participation Exercises: A Research Agenda: ftp://143.106.76.79/pub/CT001%20SocCiencia/Agosto%2016/Rowe%20and%20Frewer%202004.pdf

Public Participation Methods: A Framework for Evaluation: http://www.en.ipea.gov.br/participacao/images/pdfs/participacao/2000%20public%20participation%20methods.pdf

Assessing the effects of public participation processes from the point of view of participants: significance, achievements, and challenges: http://www.cfs.nrcan.gc.ca/pubwarehouse/pdfs/31978.pdf

World Bank's Evaluating Digital Citizen Engagement A PRACTICAL GUIDE (I find this fits the model of "define your goals and have a data collection plan," and it actually reinforces the "this stuff is complex" conclusion for me): https://openknowledge.worldbank.org/bitstream/handle/10986/23752/deef-book.pdf?sequence=1&isAllowed=y

5. Possibilities

The two most common methods seem to be surveying or interviewing participants after the P2 exercise (this could also be done twice: once right after the interaction and once again after the engagement analysis is disseminated or the policy decision is taken). The GC could set standards (with exceptions) for what goes out when, and create a modular template that contains both the process-specific evaluation for program needs and a set of standardized, GC-wide questions that address the principles. All of this would likely be combined with a narrative evaluation from the POV of the convenor. For reference, the principles:

As a side note, before processes begin, would-be convenors could work through a capacity assessment, which would dictate the extent to which they have to engage departmental or central CoEs for guidance (the GC's Project Complexity and Risk Assessment Tool is actually a great example: https://www.tbs-sct.gc.ca/hgw-cgf/oversight-surveillance/itpm-itgp/pm-gp/doc/pcrag-ecrpg/pcrag-ecrpgpr-eng.asp).
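To make that capacity-gate idea concrete (in the spirit of the PCRA, not reproducing it), here's a toy sketch; the questions, scores, and thresholds are all invented for illustration:

```python
def capacity_gate(scores: list[int]) -> str:
    """Toy capacity self-assessment: `scores` are 1-5 self-ratings on
    hypothetical questions (prior P2 experience, analysis resources,
    senior management support, etc.)."""
    avg = sum(scores) / len(scores)
    if avg >= 4.0:
        return "proceed; optional check-in with departmental CoE"
    if avg >= 2.5:
        return "proceed with departmental CoE guidance"
    return "engage a central CoE before designing the process"

print(capacity_gate([4, 3, 2]))  # -> "proceed with departmental CoE guidance"
```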

I'd tend to be an advocate for having a sub-section of the external participants "own" part of the evaluation: getting engaged during the design stage and writing their own assessment against the principles (with qualitative assessments as well, for comparability). This could be supplemented by surveys and interviews, and would appear either in What We Heard reports or online alongside the GC self-evaluation on canada.ca/consultingcanadians (https://github.com/canada-ca/welcome/issues/4).

And obviously any of these approaches would need to include data about the context and the methods and channels used for both outreach/promotion and engagement.
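To make the modular-template and context-data ideas above concrete, here's a minimal sketch in Python; the field names and question wording are hypothetical, and the real core questions would be derived from the principles:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Question:
    text: str
    scale: str  # e.g. "1-5 Likert" or "open text"

@dataclass
class EvaluationTemplate:
    """Post-engagement survey: standardized GC-wide core + process-specific module."""
    # GC-wide questions mapped to the draft principles (illustrative wording only)
    core: List[Question] = field(default_factory=lambda: [
        Question("The purpose and scope of this engagement were clear to me.", "1-5 Likert"),
        Question("I understand how my input will be used.", "1-5 Likert"),
        Question("The process was accessible to me.", "1-5 Likert"),
    ])
    # Questions defined by the convening program for its own needs
    module: List[Question] = field(default_factory=list)
    # Context metadata, so exercises can later be compared on like terms
    department: str = ""
    engagement_channels: List[str] = field(default_factory=list)  # e.g. ["online survey"]
    outreach_channels: List[str] = field(default_factory=list)    # e.g. ["social media"]
```

The core block is what would make results comparable across exercises; the module block preserves the "define success in context" flexibility.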

Other research, resources, people/orgs that are leading on this concept, possibilities?

RMarland commented 7 years ago

Thanks Kent for sharing this and the various resources - I especially appreciate the categorisation of public policy issues/P2 "situations" and the reminder of the TBS risk assessment tool in the planning context. In terms of the evaluation methods - with participants involved in design beforehand and potentially two points of interaction afterwards - I think we also need to consider monitoring/evaluation during the process, which allows for adjusting the approach, especially where we are talking about iterative engagement over time.

erninghan commented 7 years ago

I would like to see this matter from a slightly different angle: operational needs

Problem

- The GoC does not have a strong capacity to capture key activities throughout the engagement process in a timely and comprehensive way, which is the major barrier to effective evaluation.
- In addition, without such a capacity, effective direction/coordination of engagement activities would be far from optimal at the central-agency/cabinet level, as well as at the portfolio/departmental level.

Assumption

- Without central agencies' leadership, timely and effective sharing of information on engagement activities across departments is difficult, if not impossible, given the entrenched "silo" culture in the public service (despite some improvements under BP2020).

Required action

- Establish a solid information system (supported by policy requirements and defined operational mechanisms) to capture key activities throughout the engagement process (including planning) across federal departments.

Objective

Build a solid information capacity in order to:

- Inform evidence-based decision making with regard to improving GoC engagement activities and promoting good practice through systematic information gathering.
- Support timely information sharing and effective coordination at the central-agency and portfolio/departmental levels (e.g. system information can be accessed live by departments and central agencies to support their respective activities).

Approach

- Build on the PCO annual planning practice to develop a mature info/data system (moving gradually from short-term, simple/feasible tools to a long-term, multi-featured system).
- Identify existing good info systems and/or features at the departmental level for adoption.
- Link to the CWC platform (internal operation component).
- Link to other existing platforms/mechanisms for consultations (e.g. Canada Gazette publications).
- Establish an operational mechanism and policy guide for using the system.
- Enforce relevant existing policy requirements as a starting point/basis for evaluation (i.e. the comms policy requirement that all external consultations must be posted on CWC), through more detailed guidance (e.g. interpretation of "external consultations" – see Considerations below).
- Engage "early adopter" departments to ensure the quality of the system and smooth operation.

Considerations

- As the scope of engagement activities would be large and diverse, starting with "key activities" would be a practical approach; these "key activities" could be defined through clear categorisation. Based on my past analysis, key activities could be defined in terms of addressing issues in the following categories:
  - Major policy initiatives (including legislation) at the cabinet level, which are subject to parliamentary debate/approval.
  - Legal implementation of legislation (i.e. developing regulations) at the portfolio/departmental level.
  - Administrative interpretation (e.g. standards, procedures, guidance) for implementing legislation/regulations at the portfolio/departmental level.
  - Resolution of significant issues (e.g. high public concern and/or political sensitivity) at the cabinet and portfolio/departmental levels.
- The scope of information gathering should ideally cover the whole engagement process (from planning to conducting to reporting to evaluating). However, to be practical, we could start with the most readily available information at certain stages (i.e. conducting – CWC; planning – PCO annual planning).
- The info system may also need to consider linkage to and compatibility with other info/operational systems, such as GCdoc.
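As a thought experiment, here's what one record in such an info system might look like; the stages mirror the process above and the categories mirror the list just given, but every name here is hypothetical:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Stage(Enum):
    PLANNING = "planning"
    CONDUCTING = "conducting"
    REPORTING = "reporting"
    EVALUATING = "evaluating"

class Category(Enum):
    MAJOR_POLICY = "major policy initiative"
    REGULATION = "regulation development"
    ADMIN_INTERPRETATION = "administrative interpretation"
    ISSUE_RESOLUTION = "significant issue resolution"

@dataclass
class EngagementActivity:
    """One row per key activity, accumulated across departments."""
    department: str
    title: str
    category: Category
    stage: Stage
    activity_date: date
    posted_on_cwc: bool  # comms policy: external consultations posted on CWC
```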

Addressing other subject matters (e.g. developing minimum standards, improving consultation platform features, etc.) should also bear in mind how to strengthen this info system, which in turn would provide solid evidence on how to improve engagement practice.

erninghan commented 7 years ago

Thoughts, for discussion, on some possible basic statistics from potential data sets (derived from an info system or from multiple sources) that could contribute to performance measurement in line with the open dialogue principles:

Overview

- Overall statistics on:
  - Number of consultations: total and by department
  - Nature (category*) of consultations: total and by department

Effective outreach

- Outreach method
  - Overall # and % for: e-mail, social media, web announcement (employed to promote the consultation event)
- Stakeholders
  - Number of stakeholder groups involved: total and by department
  - Nature (category*) of stakeholder groups involved: total and by department
- Regional-specific
  - Number of consultations with regional activities

Informed participation

- Accessibility
  - Number/proportion of consultations accessible via (1) Consulting with Canadians, (2) Canada Gazette, (3) a departmental web site accessible to the public, (4) a third-party platform limited only to invited participants, or (5) a combination of two or more
- "What We Heard" report
  - Number/proportion of consultations that have a "What We Heard" report

Meaningful engagement

- Participation rate
  - Fluid Survey response rate
  - Number of participants for in-person consultation events
- Participation time-window
  - Online consultation comment period
- Participant feedback
  - Citation from available feedback
- Media report
  - Citation from available media report on consultation practice

Enhanced capacity

- Planning
  - Number of consultation plans available
- Post-event evaluation
  - Number of internal "lessons learnt" reports available
- Governance and coordination
  - Number/proportion of consultations that are interdepartmental
  - Number and type of data systems being used to support consultations
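A minimal sketch of how a few of these aggregates could be computed, assuming a flat export with hypothetical column names (one row per consultation):

```python
import pandas as pd

# Hypothetical flat export: one row per consultation, with columns
# department, category, has_what_we_heard_report (bool), is_interdepartmental (bool)
df = pd.read_csv("consultations.csv")

# Overview: number and nature of consultations, total and by department
total = len(df)
by_dept = df.groupby("department").size()
by_category = df.groupby(["department", "category"]).size()

# Informed participation: proportion with a "What We Heard" report
wwh_rate = df["has_what_we_heard_report"].mean()

# Enhanced capacity: proportion that are interdepartmental
interdept_rate = df["is_interdepartmental"].mean()

print(f"{total} consultations; 'What We Heard' rate: {wwh_rate:.0%}; "
      f"interdepartmental: {interdept_rate:.0%}")
print(by_dept.sort_values(ascending=False))
print(by_category)
```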

erninghan commented 7 years ago

Just noticed this PPX Learning Event (January 24, 2017). The following presentation on a TBS project (centralizing reporting via SAP) would be particularly relevant: could information on consultation projects be captured in a timely way at the planning/budgeting stage? What key information items should be included? How could any existing consultation platforms be linked and/or integrated with this project to enable better evaluation and coordination?

http://ppx.ca/en/learning-events/

**Presentation #1: Implementing Centralized Reporting at TBS and Exploring Shared Integrated Business Planning**

Jonathan Andrews, Technical Director for Financial Management Transformation; Ghislain Cardinal, Stream Lead for Financial Planning, Budgeting and Forecasting for the TBS GCFM project

In 2014, the Treasury Board of Canada Secretariat (TBS) began a multi-year program to implement a centralized reporting infrastructure using the SAP Business Warehouse application and the SAP BusinessObjects Business Intelligence platform. Following SAP's analytics road map for simplification and best practices, TBS developed a robust solution that provides improved data access to thousands of public sector workers. Now TBS has begun expanding on this strong base to move forward with integrated business planning.

kentdaitken commented 7 years ago

Great stuff, Erning.

Also, from @Studio1402:

A White Paper on Challenges and Advancements in Evaluating Public Participation: https://drive.google.com/file/d/0B-X3U0XoYSsfTTJlYkJXQ25LaDQ/view

Annex B is interesting: basically metadata to help people assess which other consultations would make good comparators in the first place.

I think we'll (using the term loosely) re-up this discussion soon and figure out concrete steps.

RMarland commented 7 years ago

These are all great resources and questions, approaching the measurement challenge from various angles (concepts, operations and feasibility, learning and improvement over time, etc.). I thought it would be worth adding here some underlying assumptions our team is making (up for discussion!), as we look at measuring engagement in the cross-GC context.

Our starting point has been that meaningful public engagement ...

Leading to such things as….

In this context, we want to a) know whether the goals of engaging the public were achieved, along with any unintended effects or results; b) learn as an organization – building capacity and skills to engage (what works, what doesn't, and when); and c) know whether/how engagement contributes to priorities over time (impact). Assessing whether we are making progress towards more open government is also a key driver.

Some of the challenges we are seeing/hearing/reading about (with thanks to Kent and Erning on this thread, and others elsewhere) are around….

With such challenges in mind, we still think it is possible to come up with measures, drawing on guiding principles for public engagement for "observable standards" (which relies on an assumption that our principles are on the right track...), assisted by the literature in this regard, including Rowe and Frewer and others cited along this thread. Some of the assumptions that we are working from include…

As we improve consultation data capture (as well as things like the PCO tracking/planning mentioned by Erning), ideas include a measurement framework or "menu" based on the principles, and working with departments to implement "core" questions in consultations (considering Kent's modular template idea above). The aggregates that Erning cites are great pointers in this context. As we consider the types of things we can observe and measure in the nearer versus longer term, we are also asking: what is readily available/collected, especially data that speaks to impacts/outcomes, if we want to build an "evidence base"?

There is also likely a gap in available/discoverable resources/case studies at the federal level, as promising practices within and outside the GC emerge: including participants in evaluation/measurement design as well as in P2 design as a best practice; mixed methods; points of measurement/open reporting during engagement processes as a tool for learning and adjusting; attention to unintended as well as intended outcomes; and comparative analysis (looking at the tool highlighted by IAP2 that Kent points to above...).

Long story short: the thread and possibilities above bear closer examination! Especially, I would say, with respect to the data sources and levers currently available, the opportunity that comes from a focus on a GC culture of measurement and impact, and considering challenges as we look to go beyond consultations to deeper engagement.

erninghan commented 7 years ago

The recent OECD document “Chapter 5 Citizen Participation: Doing it right” (https://gcconnex.gc.ca/file/view/24327679/oecd-2016-citizen-participation-doing-it-right, shared on GCconnex) could be of value to this discussion thread. Although not all of the aspects discussed would fit our operational context, some could certainly be taken into consideration in designing a performance framework.

For example, what are the outcomes we are aiming to achieve through public engagement/consultation? I think the “two clusters of benefits" elaborated in the OECD document could be considered as our two “ultimate outcomes” from public engagement/consultation activities:

(1) Better results for improved policies (innovative, more effective & cost-effective), and (2) better process for enhanced legitimacy (trust & social cohesion)

I think all the elements that Rebecca outlined for “meaningful engagement” can be clustered into the two “clusters of benefits” (outcomes), which can then be translated downstream to the output and activity levels, with carefully designed indicators, in a logic model.
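To picture that translation, here's one illustrative way to sketch a logic model as data; the outcome, output, and indicator wording is invented for this example, not drawn from the OECD document:

```python
# Illustrative logic model: ultimate outcomes -> outputs -> indicators
logic_model = {
    "better results (instrumental)": {
        "outputs": ["stakeholder input incorporated into policy options"],
        "indicators": ["# of decisions citing participant input",
                       "# of new options surfaced by participants"],
    },
    "better process (intrinsic)": {
        "outputs": ["accessible, transparent consultation processes"],
        "indicators": ["% of consultations with a public 'What We Heard' report",
                       "participant trust ratings before vs. after"],
    },
}
```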

As for data collection, we may need to move gradually from traditional "call letters"/"questionnaires" for data/info (which are likely to be perceived as a "burden" by info providers) to an internal "big data" approach through the integration of data/info (including via various monitoring tools) as part of the whole GoC enterprise approach to IM.

The following is an excerpt from the OECD document regarding the two clusters of “benefits” (bold is mine):

“If executed well, citizen participation in policy making and service delivery can be a sound investment for all stakeholders. OECD research such as the Open Government Review of Indonesia shows that citizen participation can bear a great number of potential benefits for governments, citizens and other stakeholders. These benefits can be divided in two clusters (OECD, forthcoming a; OECD, 2015b; Corella, 2011):

1. Instrumental benefits (i.e. better results): Refers to the idea that participation can improve the quality of policies, laws and services, as they were elaborated, implemented and evaluated based on better evidence and on a more informed choice. They may also benefit from the innovative ideas of citizens and be more cost-effective.

2. Intrinsic benefits (i.e. a better and more democratic policy making process): Refers to the improvement and democratisation of the process, which becomes more transparent, inclusive, legitimate and accountable. A better process can contribute to strengthening representative democracy, building trust in government and creating social cohesion.”

erninghan commented 7 years ago

Also for consideration: depending on the nature and content of a subject matter and its associated public/political environment, as well as relative internal policy capacity, the expectations, focal areas, and required effort in reaching the two primary outcomes would differ at the outset of each consultation.

kentdaitken commented 7 years ago

Also worth reading:

On the Evaluation of Democratic Innovations: http://paperroom.ipsa.org/papers/paper_150.pdf. Name is misleading; it's straight P2.

It considers different approaches to P2 (e.g., if you're consulting broadly online, you're unlikely to meet a goal of "improving civic capacity of participants") and extends the literature review a little.

ThomKearney commented 7 years ago

Fantastic thread. You guys are awesome.

laurawesley commented 7 years ago

I agree with @ThomKearney. You certainly keep me really busy! ;)

MaryBethBaker commented 4 years ago

There's a whole page of Public Engagement evaluation frameworks and historical work on GCPedia for folks in the GC. Archiving this thread.