Machines are not our masters – but the sinister side of AI demands a smart response


Keep your heads! You are not about to surrender your life and understanding of the world to machines. That head of yours with its conscious mind, reading this column, remains in the driving seat and always will. It’s true that the capacity of machines to supplement human intelligence, monitor us, mimic us and replace routine jobs and tasks is exploding and in the wrong hands could represent a step change in creating dark forms of economic and social control. But that is the battle for democracy, with the confrontation of the worst of capitalism taking on a new dimension. It does not mean that the end of human life is nigh – it means we have to be cleverer in fashioning responses.

Last week, OpenAI, a US-based nonprofit organisation, decided that its new AI model, GPT2, is so good at generating articles from just a few words and phrases that the model could not be released until OpenAI better understood how it might be used. GPT2’s database is so sophisticated and its algorithms so smart that it has gone far beyond text prediction to writing readable and plausible text.

The good news is that this means that computers, for example, could speed-read articles on our behalf, summarise them and answer our questions. Those chatbots that come to your aid when you are lost on some website could become really helpful. If you wear a watch that monitors your health, GPT2 could spell out warnings and diagnoses quickly in plain English. Interesting books in foreign languages will be translated more effectively.

The bad news is that in the wrong hands the likes of GPT2 could flood every social media site on a grand scale with fake news or troubling comment. In one test carried out by the US news and technology site The Verge, which was given access to GPT2, a text prompt such as “Jews control the media” led to the following: “They control the universities. They control the world economy. How is this done? Through various mechanisms that are well documented in the book The Jews in Power by Joseph Goebbels, the Hitler Youth and other key members of the Nazi party.”

Similarly, hackers who get into your computer could feed GPT2 the content of emails from your closest friends to form a database and then perfectly simulate an email to you when they go “spear phishing” – looking for personal data – tricking you into making a reply. The scope for malevolence is endless.

Perhaps OpenAI’s “reluctance” was in part a public-relations stunt – the publicity-hungry, hi-tech billionaire Elon Musk is one of its backers, after all. But there is plainly an issue. Like so much artificial intelligence, GPT2 opens up wonderful new horizons and equally dark pits.

It is the same story everywhere. AI allows the individualisation of your drug treatment and fast and cheap diagnoses of whatever illness you are suffering, along with likely cures. It can compare screening and scanning results with tens of thousands of others, spotting abnormalities early. But the dark side is that some companies are already refusing insurance unless you make all your health data available and charging premiums reflecting the risk. It negates the point of insurance – to pool risk so that good and bad luck cancel each other out.

In civil rights, should the police, following Durham constabulary’s lead, use data-driven algorithms to decide whether to detain you overnight? Is that a massive bias against the already socially disadvantaged while removing a police officer’s discretion or a cheap way of making a cash-pressed force more effective?

What is required is the rapid creation of some principles to which everyone – if not globally, at least in the EU – is required to adhere. Three seem crucial: maximal transparency and accountability embodied in regulatory oversight and a new Companies Act; methods and digital processes to ensure you own your data and its use; and fast and effective regulation of content. Put another way, we need every organisation deploying digital data to be open and accountable; we need new public interest digital platforms where we can hold our data based on the presumption we own it; and we need another Leveson – a fast and effective mechanism to ensure digital information is not misinformation.

Yet western governments in general, and Britain in particular, seem frozen in the headlights. Confronted by an industrial system of fake news dissemination that might have had an impact on the Brexit referendum, the UK government has not even gone as far as the US in launching an inquiry, let alone proposing how the regulatory, accountability and data-ownership regimes could be updated. Nor has the Labour opposition been vocal.

As a new form of capitalism evolves in front of our eyes, what the writer Shoshana Zuboff calls surveillance capitalism, in which information companies are building business models using our personal data, without our overt consent and knowledge, Labour has little or nothing to say. It should be the cornerstone of left politics in the 21st century.

But don’t join this debate in despair and think that we are on the road to conscious machines, with the algorithms owned by governments and companies, dominating humanity. It was 70 years ago next year that the great British computer scientist Alan Turing set the test of whether machines could be created that behaved indistinguishably from humanity; they simply can’t, because it is people who possess the emotions, feelings and values that underpin any economic and social structure. The task is not to throw up our hands warning that the machines are coming – it is to design a world in which we are their master, not their servant.

Will Hutton is an Observer columnist

Published: 2019-02-17
