The pandemic forced courts to embrace new technologies as social-distancing requirements prevented access to courthouses, and processes from filing paperwork to holding hearings moved online. But March 2020 was not the first time courts had considered adopting technology: many had already started digitizing their processes for resolving civil disputes.
Online dispute resolution (ODR) emerged in the 1990s as a way for parties engaged in e-commerce to resolve disputes over goods bought and sold on the internet. In more recent years, technology companies began developing ODR tools that courts could adopt to make it easier for lawsuits to be resolved outside the courtroom.
State and local courts have increasingly looked to ODR platforms as a way to improve how they do business and expand access to the judicial process. But court leaders need to be able to determine whether these virtual processes are working as intended and meeting the needs of both litigants and the courts. Program evaluation offers a mechanism to do that.
In the courts, ODR works much as it does in e-commerce. Parties to a lawsuit can negotiate a range of civil issues among themselves online. They can involve a mediator, message one another, or share documents on their own time, outside of court business hours, and without having to physically come to court.
According to the American Bar Association’s Center for Innovation, ODR use grew from its start in 2013 to 66 local court locations across 12 states by the end of 2019; nearly two-thirds of that growth occurred between 2018 and 2019. Initially used to resolve traffic disputes, by 2019 ODR was being used for 14 case types, including small claims and debt collection. Still, courts and policymakers lack the evidence to firmly establish ODR’s efficacy, a knowledge gap that can be addressed through program evaluation.
For example, court leaders want to know whether resolving a dispute through ODR works better than the traditional court process, whereby the parties tell their respective stories in person to a neutral third party (typically a judge), who then makes a decision based on the facts presented and the law. From a practical standpoint, does ODR make their systems more efficient, and is the technology cost-effective relative to the investment of time and resources required for its implementation? From a technological standpoint, can court leaders ensure that the platforms are user-friendly and accessible for litigants, particularly those without lawyers? Researchers have developed strategies that can help answer these questions.
Program evaluation measures a program’s impact and effects against its goals. The results then inform decisions about the initiative and guide efforts to improve how it functions.
Designing a program evaluation project starts with identifying the questions that relevant stakeholders want answered, the program-related data required to answer those questions, and which approach, or scientific methodology, is most appropriate for gathering and analyzing that data. The data falls into four broad categories: quantitative (e.g., the number and demographic characteristics of people served, time spent in the program); qualitative (i.e., information gathered through surveys, interviews, or focus groups with program stakeholders); observation-based (i.e., notes made while observing the program in action); and cost (i.e., information about the spending and cost savings resulting from program implementation). This data can be analyzed independently or in combination, using various methodological approaches.
Three methodologies hold promise for helping courts understand whether ODR technology platforms are achieving their intended goals. The first is a randomized controlled trial, a technique widely used in medical science in which the effects of an intervention are tested by exposing one randomly selected group to the intervention while withholding it from a separate control group. In the ODR context, litigants could be randomly assigned to use the online process or not, allowing courts to determine its effects.
Observation-based usability testing, which would allow courts to determine whether their ODR platforms are accessible and user-friendly for litigants, offers a second option. Finally, procedural justice surveys, which gauge litigants’ perceptions of the fairness of court procedures, could help uncover differences in those perceptions between ODR and the traditional court process.
Each of these approaches relies on different sets of data, involves different stakeholders, and requires different time frames and resources. Courts can borrow elements of each as they determine what they want to learn about ODR, how much time and effort they want to invest in studying it, and how they plan to respond to their findings. These and other tools can help courts determine whether ODR is achieving the goals that leaders set at the start, including whether the program is transparent, efficient, and equitable.
The questions ODR raises are not unlike those surrounding the various tech tools that courts have embraced in response to the COVID-19 pandemic. Drawing on their experiences with ODR, court leaders should work with researchers and other stakeholders to create and adopt a national framework for evaluating technology tools, one that could help courts with limited resources learn common lessons from evaluations in other jurisdictions. Such an approach would encourage courts to employ scientifically sound methods for internal assessments of their technology platforms. Moreover, as these tools expand and evolve, having a playbook for evaluation would enable leaders to continually monitor the progress of their modernization projects, regardless of how the technology changes.
Program evaluation is not a one-size-fits-all concept. A broad framework, however, would provide a menu of options to build an evidence base around the growing use of technology in the nation’s courts.
Erika Rickard is a director and Qudsiya Naqui is an officer with The Pew Charitable Trusts’ civil legal system modernization initiative.