Washington state could become a national leader in regulating the technologies of the future, thanks in part to a bill up for debate that would establish new guardrails on government use of artificial intelligence.
On the heels of Washington’s landmark facial recognition bill enacted last year, state lawmakers and civil rights advocates are demanding new rules that ban discrimination from automated decision-making by public agencies. The bill would establish new regulations for government departments that use “automated decision systems,” a category that includes any algorithm that analyzes data to make or support government decisions.
The legislation would establish some of the most concrete artificial intelligence regulations in the U.S., which has not substantively tackled the issue at the federal level. Its proponents say Washington can’t wait for federal guardrails because the government is already deploying AI systems with real-world consequences.
If enacted, the bill would prohibit public agencies in Washington state from using automated decision systems that discriminate against different groups or make final decisions that impact the constitutional or legal rights of a Washington resident. The bill also bans government agencies from using AI-enabled profiling in public spaces. Before deploying an automated decision system, an agency would be required to publish an accountability report showing that the technology is not discriminatory.
The ACLU of Washington and other digital rights groups are backing the bill, which is sponsored by Sen. Bob Hasegawa (D-Beacon Hill).
Hasegawa called AI decision-making systems “one of the most insidious” technologies that impacts “how and what we do during the day, every day” during a January hearing of the Senate State Government and Elections Committee.
“Everything from insurance ratings, to where to locate grocery stores, to you name it,” he said. “The most important disparity is in how it treats people of color. There’s no shortage of data on how those disparities exist.”
The bill’s backers highlighted real-world examples in which artificial intelligence is already discriminating against historically marginalized groups. For example, when the U.S. Justice Department ordered the early release of low-risk prisoners vulnerable to COVID last spring, the Federal Bureau of Prisons planned to use an automated risk assessment tool called PATTERN. The algorithm determined just 7% of Black men were low-risk enough for early release compared with 30% of white men, according to a DOJ assessment highlighted by The Marshall Project.
“This is a bill that would truly be groundbreaking,” said Jennifer Lee, manager of the ACLU of Washington’s Technology and Liberty Project, in an interview with GeekWire.
“What this bill would give Washington the opportunity to do is set a precedent, raise awareness of the issue that people’s lives are being affected by algorithmic decision making tools,” she added. “Washington really has the opportunity to show that we take AI and algorithmic bias very seriously.”
During the hearing, representatives from the law enforcement and tech communities testified asking that legislators clarify which technologies would be subject to the regulations. They are concerned that standard uses of automation, such as red light cameras or fingerprint analysis, could face an undue burden under the law.
“We absolutely agree with the goals of this legislation and agree with the need,” said Vicki Christophersen, a lobbyist for the Internet Association. “We just want to make sure there aren’t unintended consequences on uses that are pretty standard: red light cameras, speed zones, the use of identity systems, employee screening criteria on objective data such as years of experience required, those types of things.”
James McMahan, policy director at the Washington Association of Sheriffs and Police Chiefs, said he expects routine screening of law enforcement candidates and DNA and firearms analysis to be subject to the regulations.
“We have agencies that will use crime reports [and] algorithms to suggest where we allocate our patrol resources in the highest areas of criminal activity in their jurisdiction,” McMahan said during the hearing. “Many of these, we think we would all agree, are legitimate public uses so we would ask for that continued conversation.”
But some civil rights activists argue that even those seemingly innocuous use cases — like algorithms that determine neighborhoods with the highest crime rates — can unintentionally perpetuate systemic discrimination.
“It depends on a history of where police have policed in the past, rather than predicting where crime will occur in the future … this kind of algorithm can really replicate existing racial biases in policing as opposed to actually decreasing crime,” Lee said.
If enacted, Washington would pioneer AI regulation in the U.S. A handful of states attempted to pass various AI regulations in 2020, but those efforts were unsuccessful, according to the National Conference of State Legislatures, which tracks bills like Washington’s. In its current legislative session, New Jersey is also considering a bill that would forbid certain types of discrimination by automated decision systems. Other states have passed bills pledging to review or study the impact of artificial intelligence technology.
The bill under consideration in the Washington state legislature passed the State Government & Elections Committee last week and has been referred to Ways & Means, which reviews legislation that could impact the budget.