A Tentative Step Toward Local Government Accountability in NC

January 29, 2024

Marc Joffe

Performance measurement in the public sector can be challenging, and state and local governments too often avoid the task entirely. However, the absence of performance data leaves stakeholders unable to determine whether their state, city, or special district is using tax money effectively. The University of North Carolina’s (UNC) School of Government has been working with Tar Heel State municipalities to fill this gap, but its efforts illustrate the stiff challenges in objectively assessing government effectiveness.

In the private sector, organizational performance is easy to assess by looking at revenues, net earnings, and market capitalization. But government entities usually do not assess themselves or their peers based on financial metrics or market response, leaving stakeholders without the clear bottom line that disciplines private firms.

In the absence of a bottom line, the goals of government organizations can be nebulous. For example, “the California Arts Council is a state agency with a mission of strengthening arts, culture, and creative expression as the tools to cultivate a better California for all.” Determining whether and to what extent the Council fulfills this mission necessarily involves a high degree of subjectivity.

In other cases, there may be a clear goal, but attainment is difficult to measure. The purpose of a municipal street cleaning operation is obvious, but rigorously measuring street cleanliness is another matter.

Even when readily definable and measurable service quality indicators can be collected, governments may not do so consistently. One common measure of service quality is 911 response time—how much time elapses between a call to the government’s emergency phone number and the arrival of first responders at the scene. Although straightforward, this metric is far from universally available.

As the website SafeSmartLiving found, many large cities do not report their emergency response times on a regular basis, and Chicago doesn’t report them at all. Further, some cities provide an average across all 911 calls, while others report the average only for “Priority 1” calls, and even the definition of “Priority 1” varies between cities.

Third parties may be able to calculate average response times if agencies log all their “Calls for Service” on an open data portal. But as data analyst Jeff Asher discovered, there were only “15 US law enforcement agencies covering nearly 5 percent of the US population that publish Calls for Service data with enough information to calculate response times in their open data portals.”
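The computation itself is simple once those timestamps exist. Below is a minimal sketch in Python; the file name, column names, and priority field are assumptions for illustration only, not the schema of any actual city’s open data portal.

```python
import pandas as pd

# Hypothetical "Calls for Service" export from an open data portal.
# File name and column names are assumptions for illustration;
# real portals use varying schemas.
calls = pd.read_csv(
    "calls_for_service.csv",
    parse_dates=["call_received", "unit_arrived"],
)

# Elapsed minutes from call receipt to first-unit arrival.
calls["response_minutes"] = (
    calls["unit_arrived"] - calls["call_received"]
).dt.total_seconds() / 60

# Discard records with missing or implausible values
# (cancelled calls, data-entry errors, negative durations).
valid = calls.dropna(subset=["response_minutes"])
valid = valid[(valid["response_minutes"] >= 0) & (valid["response_minutes"] <= 180)]

print(f"Average response time: {valid['response_minutes'].mean():.1f} minutes")

# If the export labels call priority, break the average out by priority level.
if "priority" in valid.columns:
    print(valid.groupby("priority")["response_minutes"].mean().round(1))
```

As Asher’s finding suggests, the hard part is not the arithmetic but getting agencies to publish both timestamps in the first place.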

Finally, governments all too often report outputs instead of outcomes. Simply knowing how much effort government employees expended is a poor substitute for knowing whether those efforts improved the quality of life. For example, instead of reporting their Pavement Condition Index (PCI)—a standard measure of road quality—some governments simply publish the number of lane miles repaved over a given period.

North Carolina Local Benchmarking Initiative

Scholars at UNC have been trying to solve the challenge of government performance measurement since 1995, when the university’s School of Government partnered with two government associations to sponsor a benchmarking initiative.

The initiative collects hundreds of metrics across eleven categories. Each metric is rigorously defined. For example, the police services metric “average response time for high-priority calls” is defined as the “[a]verage time in minutes elapsed from when a high priority call for service is received by the police department from the dispatcher or 911 center (dispatch received) to when a police unit arrives at the scene of the incident (arrival on scene).” However, departments “may use their own definition of high-priority.”

While the benchmarking project’s data are quite detailed and address both outputs and outcomes, they nonetheless have shortcomings. Among the 552 municipalities in North Carolina, only fourteen participated in the 2023 benchmarking effort.

And, while external stakeholders can view graphs portraying some of the results on a project website, there is no way to download all the measures. Cities pay to participate in the benchmarking initiative, and project managers are reluctant to alienate them by providing too much transparency.

So, while UNC’s project is a step in the right direction, it has yet to bring widespread accountability to North Carolina municipalities. That could best be achieved by state- or federally-mandated and funded performance data reporting. But while a top-down solution might get the job done, it would also raise questions about local control and costs.

A better way to achieve accountability is to migrate public services away from monopoly government provision and toward free market competition. While constituents can only vote based on the imperfect information governments provide them, customers can vote with their wallets. Service providers who fail to produce quality outcomes at reasonable costs face the prospect of failure and liquidation, which is true accountability.