In recent years, AI systems have repeatedly been shown to produce unfair outcomes, according to researchers at the Massachusetts Institute of Technology (MIT) in the US.
This is dangerous, they said, as such systems are increasingly used for everything from predicting crime to determining what news we consume.
“Last year’s study showing the racism of face-recognition algorithms demonstrated a fundamental truth about AI: if you train with biased data, you will get biased results,” said MIT PhD student Alexander Amini.
The new algorithm can learn both a specific task, such as face detection, and the underlying structure of the training data, which allows it to identify and minimise any hidden biases.
In tests, the algorithm decreased “categorical bias” by over 60 per cent compared to state-of-the-art facial detection models, while maintaining the overall precision of these systems.
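The core resampling idea — upweighting training examples that are rare in the learned structure of the data — can be illustrated with a minimal sketch. This is not the researchers' implementation: the `debias_weights` helper and the histogram-based density estimate below are assumptions for illustration only, standing in for the learned latent representation described in the article.

```python
import numpy as np

def debias_weights(latents, bins=10, alpha=0.01):
    """Illustrative sketch: give each example a sampling weight inversely
    proportional to the estimated density of its latent features, so
    under-represented examples are seen more often during training."""
    n, d = latents.shape
    weights = np.ones(n)
    for j in range(d):
        # Estimate the density of feature j with a simple histogram
        # (a stand-in for the learned latent distribution).
        hist, edges = np.histogram(latents[:, j], bins=bins, density=True)
        idx = np.clip(np.digitize(latents[:, j], edges[1:-1]), 0, bins - 1)
        weights *= 1.0 / (hist[idx] + alpha)
    return weights / weights.sum()  # normalise to a sampling distribution

# Toy data: 90 examples clustered near 0, 10 rare examples near 5.
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0, 0.1, 90), rng.normal(5, 0.1, 10)])[:, None]
w = debias_weights(z)
# The 10 rare examples end up with a much larger share of the sampling
# probability than the 10% they would get under uniform sampling.
```

In the actual research, the density estimate comes from the latent space of a variational autoencoder trained jointly with the detection task, rather than from a fixed histogram.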
Many existing approaches in this field require at least some level of human input to define the specific biases that researchers want the system to learn.
“Facial classification in particular is a technology that’s often seen as ‘solved,’ even as it’s become clear that the datasets being used often aren’t properly vetted,” said Amini.
“Rectifying these issues is especially important as we start to see these kinds of algorithms being used in security, law enforcement and other domains,” he said.
The system would be particularly relevant for larger datasets that are too big to vet manually, and it also extends to other computer vision applications beyond facial detection, researchers said.
(This story has not been edited by Business Standard staff and is auto-generated from a syndicated feed.)