Giving Thoughts

May 28, 2014

If You Can’t Trust The Data, Measurement Is Meaningless

By Daniel Shapiro, Director of Data Analytics and Management, and Scott Cody, Vice President of Human Services Research, Mathematica Policy Research

In its quest to use private dollars for the public interest, the philanthropic sector struggles to find the best ways to measure progress and promote accountability. Many interesting articles and blog posts address the question, including Gary Wexler’s recent post on this blog, “Could the Incessant Demand for Data Kill Innovation in the Nonprofit Sector?”; Matteo Tonello and Alex Parkinson’s response, “The Incessant Demand for Data Is No Bad Thing”; and Wendy Ramage Hawkins’s piece, “Helping Nonprofits Use Data Effectively Is Fundamental To Our Support of Partners.” Rather than jump into the unfinished debate over what should and shouldn’t be measured in philanthropy, we want to focus on a critical component of measurement that’s too often absent from these discussions: the quality of the data being used.

Collecting basic performance statistics requires minimal effort. Data related to fundamental performance metrics (the number of participants enrolled, how many staff have been trained, the number of families served, and so on) should be collected for program management purposes anyway. This process does not require the collection of billions of bits of data, should not be off-putting, and remains essential for knowing whether a program is meeting basic goals, responsibly disbursing resources, and functioning as expected.

With the digital revolution, we can collect even more data with less effort. Affordable web-based case management systems—including some that focus on nonprofit social service providers—are widely available. These systems, which streamline the process of collecting basic performance measures, can vastly expand the number and accuracy of measures collected. Properly configured, these systems make it a relatively straightforward exercise to look at program performance by site, participants’ characteristics, the type of intervention implemented, or other dimensions of interest.
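
As a simple illustration of that kind of slicing, the short Python sketch below uses the pandas library to summarize an exported file of case records by site. The file name and column names (“site,” “participant_id,” “services_received”) are hypothetical stand-ins for whatever a given case management system exports, not any particular vendor’s schema.

import pandas as pd

# Hypothetical export from a case management system; the file and
# column names are assumptions for illustration only.
records = pd.read_csv("case_records.csv")

# Enrollment counts and average services delivered, broken out by site.
by_site = records.groupby("site").agg(
    participants=("participant_id", "nunique"),
    avg_services=("services_received", "mean"),
)
print(by_site)

The same summary could just as easily be grouped by participants’ characteristics or by intervention type instead of site.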

Data collection requires care

However, just because it is easy to collect data does not mean the data collected are high quality. The truism “garbage in, garbage out” succinctly captures why programs need to take care to collect accurate and consistent data. While no responsible organization would ignore the fundamental principles of financial management, many organizations pay scant attention to managing the quality of their data and thus risk squandering real value.

As the amount of available data expands, programs can use the information for more than just tracking day-to-day performance. The ability to merge and match information from a case management application with external administrative data sets, or even with information from social media sites, creates a wealth of possibilities that were unimagined just a few years ago. With the right data and the right tools, programs can readily assess what is and isn’t working, which can guide strategic planning and foster innovation. But using data for strategy and innovation increases the stakes tied to data quality. No program wants a “garbage out” strategy.
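
As a rough sketch of what such matching can look like, the example below (Python with pandas again) merges case records with an external administrative file. The shared “client_id” key and the “admin_outcome” column are assumptions for illustration; real administrative data sets each have their own identifiers and access rules.

import pandas as pd

# Case records and an external administrative data set; the file
# names and the shared "client_id" key are assumptions.
cases = pd.read_csv("case_records.csv")
admin = pd.read_csv("administrative_data.csv")

# A left join keeps every case record and attaches matching
# administrative information where it exists.
merged = cases.merge(admin, on="client_id", how="left")

# Unmatched records surface as missing values in the merged columns,
# which is itself a useful data-quality signal.
match_rate = merged["admin_outcome"].notna().mean()
print(f"Matched {match_rate:.1%} of case records")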

Tips for ensuring high-quality data

So how do organizations make sure the data used to manage a program are of high quality? Fortunately, it’s not difficult to put processes in place that can help ensure the quality, and thus the value, of data. Organizations can easily move from an undisciplined, reactive approach to data quality to a more governed position by implementing some common-sense procedures:

1. Identify the program’s critical data elements. These are the pieces of information needed to manage the program and track its activity. Start by asking the questions you want to answer, then work backward to ensure you’ll have the information to provide the answers.
2. Determine the criteria for assessing the quality of those critical elements. Identify ways to validate that your information is consistent and comprehensive. Basic elements to monitor are the share of missing or out-of-bounds values, poorly formatted data elements (e.g., an email address with no “@”), and unexpected duplicate records.
3. Assess the quality of your critical data. Once the critical elements and assessment criteria are established, profiling your data to surface problems is a straightforward process. The majority of problems can be uncovered by examining simple frequency distributions, and many case management systems can produce reports of this type on demand. (A sketch of these checks appears after this list.)
4. Investigate and remediate identified problems. This step ensures that the information used for program planning is as useful as possible. Often, data quality issues point to service delivery problems, such as incorrect case intake procedures at a specific site or service providers not adhering to proper protocols.
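
To make steps 2 and 3 concrete, here is a minimal sketch of these checks in Python with pandas. The column names (“email,” “age,” “participant_id,” “enrollment_status”) and the 0–120 age bounds are hypothetical choices for illustration.

import pandas as pd

# Hypothetical export of case records; column names are assumptions.
records = pd.read_csv("case_records.csv")

# Step 2-style criteria: missing values, poorly formatted fields
# (an email address with no "@"), out-of-bounds values, and
# unexpected duplicate records.
missing_counts = records.isna().sum()

has_at = records["email"].str.contains("@", na=False)
bad_emails = (records["email"].notna() & ~has_at).sum()

out_of_bounds_ages = (records["age"].notna()
                      & ~records["age"].between(0, 120)).sum()

duplicate_ids = records["participant_id"].duplicated().sum()

print("Missing values per field:")
print(missing_counts)
print("Malformed email addresses:", bad_emails)
print("Out-of-bounds ages:", out_of_bounds_ages)
print("Duplicate participant IDs:", duplicate_ids)

# Step 3-style profiling: a simple frequency distribution surfaces
# unexpected or invalid category values at a glance.
print(records["enrollment_status"].value_counts(dropna=False))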

These data-quality processes are not costly to implement, yet they go a long way toward ensuring that the information used for program management, assessment, and strategic planning is reliable. They give organizations confidence that their decisions are based on accurate, supportable evidence, so they can minimize risk and doubt and maximize program impact.

About the authors:

Daniel Shapiro
Director, Data Analytics and Management
Mathematica Policy Research

Daniel Shapiro is director of Mathematica’s Data Analytics and Management department in the Surveys and Information Services Division. He has expertise in performance measurement, data warehousing, data specifications, and system testing.

Scott Cody
Vice President and Director of Human Services Research
Mathematica Policy Research

Scott Cody is a vice president and director of human services research in Mathematica’s Cambridge, Massachusetts, office. He is a national expert in the intake and eligibility determination procedures used by state Supplemental Nutrition Assistance programs. He also is a deputy director of the U.S. Department of Education’s What Works Clearinghouse—a systematic review of rigorous education research.