Human rights commission tackles artificial intelligence


KIM LANDERS:  From self-driving cars to facial recognition, new technology powered by artificial intelligence is changing the way we live and work, and how we make decisions.

  

But what is this rapid rise of new technology doing to our human rights?

  

That’s the question a new project from the Human Rights Commission is going to tackle. It’s trying to identify the issues at stake before coming up with a final report by late next year. 

  

Edward Santow is the Human Rights Commissioner. 

  

Edward Santow, good morning.

  

EDWARD SANTOW:  Good morning.  

  

KIM LANDERS:  What’s the biggest dilemma when it comes to artificial intelligence and human rights?

  

EDWARD SANTOW:  Well, we’re seeing how the world around us is changing really rapidly. We are starting to glimpse how our personal information can be used against us.

  

Cambridge Analytica showed how that can be used to change the very news we receive, but it is also doing a new thing: it is creating new forms of discrimination, known as algorithmic bias.

  

KIM LANDERS:  Tell us a little bit more about that.  

  

EDWARD SANTOW:  Well, that can lock older people out of getting a job. It can mean that certain groups, such as Indigenous people, are unfairly targeted by police, and that is a wholly new thing.

  

KIM LANDERS:  Privacy is a crucial human right. So much of our data is already being shared by government agencies and private companies, so how do we impose a check and balance on that?

  

EDWARD SANTOW:  Well, what we need to do is be able to control how our personal information is used.  

If our personal information is being used to provide us a useful service and we understand all of the secondary uses, that’s one thing, but if it is at risk of misuse, that’s an entirely different matter.

  

KIM LANDERS:  There’s a lot of debate at the moment about the Federal Government’s digital health record, MyHealth. Are our Human Rights protected there? 

  

EDWARD SANTOW:  Well, I think we can do better. The first, most important issue is that we should have autonomy. We should be able to control how our personal information is used.

  

Secondly, we need strong protections against cyberattack and misuse. And third, I’m sure Australians are generally pleased about the idea that their personal information can be used to improve our health care, but if it is also available for secondary uses that are perhaps not very helpful to us, that’s a different matter.

  

So for example … 

  

KIM LANDERS:  So when you say we need to do better, what do you think the Federal Government needs to do with MyHealth to improve those protections? 

  

EDWARD SANTOW:  Well, first we need to understand what those secondary uses are. So, for example, there have been suggestions that it could be used, if you are slow at paying a health care bill, to chase that debt.

  

It can be put to a range of uses like that, and that is something that is making people quite resistant to participating in the system.

  

We also need to be conscious that this can be a honeypot for criminals who want to do us harm, and so there have to be really clear, strong, robust protections against that form of misuse.

  

KIM LANDERS:  When we talk about artificial intelligence, we’re not just talking about killer robots. Our lives are already entangled with autonomous machines and intelligent systems, as you’ve mentioned, controlled by technology giants, so how do we hold those tech giants accountable in a legal and moral sense?

  

EDWARD SANTOW:  Well, as you say, our world has been radically reshaped around us, but human rights have not been at the centre of these new changes in technology.

  

We’re surrounding ourselves with ever more powerful tech gadgets, but we risk sleepwalking into a world where our human rights are not properly protected.

  

The project at the Human Rights Commission that I am leading is about responsible innovation. It means we want to take the opportunities presented in terms of growing our economy, but we must be able to hold to account those tech companies and others who could put our human rights at risk.

  

KIM LANDERS:  Australia doesn’t have a rulebook, an overarching regulation, for artificial intelligence. Do we need to get onto that quick smart, and how do we make sure that it’s effective and enforceable?

  

EDWARD SANTOW:  The first thing we need to do is understand where the gaps in the law are that are currently being exploited, and we need to fill those gaps to make sure that artificial intelligence and other new technology serve humanity.

  

KIM LANDERS:  As artificial intelligence systems and machines become more human-like, should these machines, these robots, themselves eventually be given human rights?

  

EDWARD SANTOW:  Well, I think we’re a little bit off that. I mean, I think there are all kinds of very exciting prognostications about where artificial intelligence is going, and we can cross that bridge when we come to it, but right now we’ve got some very concrete, real issues that we need to deal with.

  

Privacy, I think, people already start to understand. What I think people understand less is that other really basic human rights, like equality and freedom from discrimination, are the sorts of things that are in play right now, and we have to deal with those effectively.

  

KIM LANDERS:  Edward Santow, thank you very much for joining AM.  

  

EDWARD SANTOW:  Thank you.  

  

KIM LANDERS:  The Human Rights Commissioner, Edward Santow.  

  

  

  

 
