Fostering Trust Amongst Stakeholders Through the Process of Learning Measurement

November 28, 2017

All jurisdictions want a strong and growing economy, but defining exactly what that means is complicated, and reaching complete agreement among a community's residents and professional organizations is unlikely. Such is the nature of economic development. For evidence-based policy to work, however, there must be a set of metrics that can be measured to assess the extent to which government policy is achieving community goals.

The idea of evidence-based policy is easy to get behind. It seems obvious that decisions informed by evidence will produce better outcomes than decisions made in other ways. Once we dig beneath the surface, however, it becomes clear that the practice of evidence-based policy has often been more aspirational than realized. There are numerous challenges, not the least of which is determining what is to be measured.

Academic research offers practitioners little insight on this point. Guidance is usually very general, laying out a process based on the logic of a program and assuming that the correct metrics will reveal themselves once the program logic is understood. Details about actually translating program logic into metrics tend to be scant; examples are provided, but they may offer little guidance in different contexts. This is particularly challenging when the context itself is vague and outcomes are difficult to define, let alone operationalize into metrics.

In 2015, Fairfax County and Virginia Tech's School of Public and International Affairs (SPIA) began working to develop a set of metrics for Fairfax County's Strategic Plan to Facilitate Economic Success (the Plan). The Plan is a wide-ranging, innovative take on economic development, recognizing that a county's economic health extends well beyond business growth and office utilization to related issues such as education, land use, and housing affordability, among others. This recognition meant that Fairfax County could not rely on traditional measures of economic development like office vacancy or unemployment; a wide-ranging Plan requires wide-ranging measures. But what needed to be measured was unclear.

We began our work by following the traditional suggestions of the academic literature. We mapped out the logic of county policies to identify what the county was trying to achieve. We then met with various stakeholders, such as business groups, resident associations, and school representatives, to get their perspectives on what the important goals were. Drawing on all of this information, we developed a list of outcomes identified by county staff and community stakeholders. The list was long, but it encompassed a robust set of interests that met the intentions of the Plan.

During this process, however, we recognized that the list was simply too long; the county would never be able to manage such an extensive measurement program. From that point forward, we undertook an iterative process of honing the list of goals to a manageable size, considering whether measures the county was already collecting could serve the goals we identified and, if not, locating potential measures that could fill the gaps. Throughout, we brought stakeholders and county staff together to discuss potential metrics and gather their feedback. We ultimately settled on a set of 35 metrics that emphasized the Plan's priorities and carried the backing of stakeholders, staff, and elected representatives.

Economic development is a complex issue that brings in a wide range of stakeholders with different perspectives. Political conflict was inevitable, and not everyone would embrace the selected metrics. The process we used acknowledged this reality and produced a measurement strategy that provided meaningful information about program performance. More importantly, the process built relationships between staff and community stakeholders and gave public employees a way to better understand the environment within which the program is managed.

We learned that there is usually not one, or even a few, optimal metrics, and that different stakeholders will prefer different metrics. We learned that performance measurement is a political process of engaging stakeholders and framing the determination of outcome metrics as an opportunity to gather different interests together in search of, though likely without reaching, consensus on what organizational outcomes should be measured. Given this reality, the true value of our work was in the process itself. It fostered trust and created learning opportunities for the administrators and stakeholders involved.

In conclusion, we suggested that the outcomes selected for measurement matter less than the process used to derive them. By gathering different stakeholder groups together, and having staff engage with those groups, relationships are formed, trust is built, and metrics can achieve a level of legitimacy and support among these groups. By recognizing that no metric, or even set of metrics, can adequately measure broad social conditions, stakeholders can weigh the implications of selecting some metrics over others and, more importantly, recognize what is not being measured in addition to what is. Through this process, administrators came to better grasp what their programs were intended to achieve, forged stronger relationships with stakeholders, and gained a clearer understanding of the interests of political principals. While a set of metrics was ultimately settled on, the learning process at the organizational level fostered a deeper understanding of what the policy needs to accomplish and deeper connections across governmental agencies and actors within the community.

“The process of measurement is important for developing understanding of what public policies are intended to do. Involving the stakeholders early on and as often as possible in the process develops trust between the government and the community.”

Author biography:

Adam Eckerd was an Assistant Professor with the Center for Public Administration and Policy at Virginia Tech’s School of Public and International Affairs (SPIA) from 2012 to 2017. He conducts research on organizational and individual decision-making, particularly as it relates to how risk is assessed and how information is used to manage public and nonprofit programs and policies.