A.I. ‘bias’ could create disastrous results; experts are working out how to fight it

Artificial Intelligence

Biased AI can have serious life-altering consequences for individuals.

It was reported in 2016 that the COMPAS program — or Correctional Offender Management Profiling for Alternative Sanctions — used by U.S. judges in some states to help decide parole and other sentencing conditions, had racial biases.

“COMPAS uses machine learning and historical data to predict the probability that a violent criminal will re-offend. Unfortunately, it incorrectly predicts that black people are more likely to re-offend than they actually do,” according to a paper by Toby Walsh, an artificial intelligence professor at the University of New South Wales.
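One common way researchers measure this kind of bias is to compare a model's error rates across demographic groups: a system that flags non-re-offenders as high risk far more often for one group than another is treating those groups unequally even if its overall accuracy looks fine. A minimal sketch of that idea in Python, using entirely made-up records (this is not COMPAS's model or data; the group labels and numbers are hypothetical):

```python
# Illustrative only: comparing false positive rates across groups.
# A "false positive" here is someone predicted high-risk who did NOT re-offend.
from collections import defaultdict

def false_positive_rates(records):
    """Per-group FPR = high-risk-flagged non-re-offenders / all non-re-offenders."""
    fp = defaultdict(int)  # predicted high risk, did not re-offend
    tn = defaultdict(int)  # predicted low risk, did not re-offend
    for group, predicted_high_risk, reoffended in records:
        if not reoffended:
            if predicted_high_risk:
                fp[group] += 1
            else:
                tn[group] += 1
    return {g: fp[g] / (fp[g] + tn[g]) for g in set(fp) | set(tn)}

# Hypothetical records: (group, predicted_high_risk, reoffended)
records = [
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", False, False), ("B", False, False), ("B", True, True),
]
rates = false_positive_rates(records)
print(rates)  # group A is wrongly flagged twice as often as group B
```

In this toy data, group A's false positive rate is twice group B's, which is the shape of the disparity ProPublica's 2016 analysis reported for COMPAS scores.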

While biases in AI exist, it is important that certain decisions are not left to software, Walsh told CNBC. That is especially true when such decisions can directly harm a person's life or liberty, he added.

Examples of those decisions include the possibility of AI being used in hiring decisions — or used during military conflicts as part of autonomous weapons.

“If we work hard at finding mathematically precise definitions of ethics, we may be able to deal with bias in AI and so be able to hand over some of these decisions to fairer machines,” Walsh said. “But we should never let a machine decide who lives and who dies.”

(Excerpt) Read more Here | 2018-12-14 10:49:31
