In February, thousands of aspiring lawyers took an online bar exam in the privacy of their homes. Thirty-six jurisdictions offered the remotely proctored exam using artificial intelligence (AI) facial recognition and biometric monitoring software.
The bar applicants were digitally surveilled to detect cheating, yet the proctoring algorithms remain shrouded in secrecy. The information asymmetry is more than ironic; it exposes troubling blind spots in the booming online proctoring and AI surveillance industries.
The design and development of AI systems are virtually unregulated. The government regulates vaccines, airplanes, and home loans to mitigate risk and build public trust in these and countless other commercial offerings. Reasonable minds may disagree about the means and ends of regulation, but without metrics and standards around AI surveillance systems, we use this technology at our peril.
It is one thing when an AI system mistakenly flags an email as spam or recommends a bad movie. But in high-stakes contexts, like the bar exam, algorithmic mishaps can have damning consequences. Bar takers flagged as potential cheaters can have their records marred by disciplinary investigations, or worse, be precluded from practicing law.
Questions About Fairness and Accuracy
The National Conference of Bar Examiners (NCBE), which creates and licenses the exam questions, has a legitimate interest in securing its proprietary questions. Like other standardized tests, the bar exam reuses certain questions to anchor the exam and ensure its long-term reliability. When Covid‑19 forced examiners to consider alternatives to the traditional in-person bar exam, most jurisdictions turned to online testing with AI proctoring as a solution to the security concerns.
But this solution triggers a different set of concerns about the fairness, efficacy, and safety of the AI proctoring system. The science behind using AI systems to predict cheating is highly dubious, especially when the systems are applied to individuals with disabilities whose test-taking behaviors may appear atypical.
Moreover, some online bar applicants of color reported being shut out of the exam because the facial recognition software did not recognize them. A demeaning workaround for some applicants of color was to shine a bright light on their faces throughout the two-day exam.
The software provider for the online bar exam, and the states that have used it, have argued that the AI proctoring system is trustworthy because, ultimately, humans review the computer-generated results before sanctions are imposed. Still, whether these humans mitigate the risk of harm depends on how harm is being defined and measured, and from whose perspective.
In California, for example, more than 3,000 of the 9,000 individuals who took the online bar exam last fall were flagged and reported to state licensing officials. Of those, less than 1% received any type of sanction.
One thing is clear: the AI proctoring system was designed to cast a wide net to compensate for its lack of precision. Someone decided that it was far more important to flag every potential cheater than to let one slip away.
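The California figures above make the tradeoff concrete. A rough back-of-the-envelope calculation (a sketch only, assuming approximately 9,000 examinees, 3,000 flags, and taking "less than 1%" sanctioned as an upper bound) shows just how imprecise the flagging was:

```python
# Back-of-the-envelope estimate using the approximate California figures
# cited above; the sanction rate is an assumed upper bound ("less than 1%").
examinees = 9_000            # total online bar takers (approximate)
flagged = 3_000              # flagged by the proctoring software (approximate)
sanctioned = 0.01 * flagged  # upper bound: under 1% of flags led to any sanction

flag_rate = flagged / examinees   # share of all takers flagged
precision = sanctioned / flagged  # upper bound on the precision of a flag

print(f"Flag rate: {flag_rate:.0%}")           # roughly one in three takers
print(f"Precision: at most {precision:.0%}")   # at most ~1 in 100 flags upheld
```

On these assumptions, roughly one in three test takers was flagged, yet at most one in a hundred flags resulted in any sanction.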
Standards Are Needed
We must be clear-eyed about the motivations, means, and ends of efforts to improve the accuracy and efficacy of AI proctoring. Improving system performance requires huge amounts of fresh biometric data to retrain the algorithms, and the most valuable data will come from test takers.
Wittingly or not, the biometric data of online bar takers might be used to improve the commercial value of the surveillance algorithms deployed against them. These aspiring lawyers can only hope that their sensitive personal information remains secure. The recent SolarWinds and Verkada hacks are sobering reminders that cybersecurity is an aspiration, not a guarantee, for governments and businesses alike.
In December 2020, a group of U.S. Senators raised concerns about privacy and bias in a letter to the software company servicing the online bar exam. The company’s response, which denies any wrongdoing, raises more questions than it answers.
For example, the company states that its proctoring system was audited. But we do not know who conducted the audits (internally or externally), what tests or metrics were used to assess bias and accuracy, what vulnerabilities exist in the system, or the provenance of the training data.
Without regulatory intervention, the incentive structures will continue to favor corporate secrecy and a move-fast-and-break-things mentality. That should concern bar applicants, and also bar examiners, who may be called to account if things break.
An Urgent Matter
Eighteen jurisdictions have already announced that they will administer a remote online bar exam in July 2021, and that number is projected to increase. The NCBE recently announced that its next-generation bar exam will be offered exclusively online, effectively ending the tradition of the paper-and-pencil exam.
As online proctoring is poised to become a post-pandemic mainstay, due regard must be paid to the cross-cutting interests of a range of stakeholders. To that end, the Association of Academic Support Educators has urged states to adopt best practices for online bar exam administration, which prescribe, among other things, increased transparency around the use of AI proctoring systems.
If bar applicants are harmed from the online proctoring system, how will we know, and how will responsibility and liability be allocated? The bar applicants surveilled last month may soon be among the lawyers helping to answer these questions.
This column does not necessarily reflect the opinion of The Bureau of National Affairs, Inc. or its owners.
David Rubenstein is the James R. Ahrens Chair in Constitutional Law and director of the Robert Dole Center for Law and Government at Washburn University School of Law. His scholarship focuses on constitutional law, administrative law, and artificial intelligence.
Marsha Griggs is an associate professor of law and director of academic enhancement and bar readiness at Washburn University School of Law. She is a member of the Collaboratory on Legal Education and Licensing for Practice, and her scholarly focus is on professional regulation and licensing.