Nvidia share price reflects ups and downs of artificial intelligence

Nvidia’s share price over the past five years is a good representation of the frenzy about artificial intelligence that has occurred over the same period. From the start of 2014 until September 2018, Nvidia’s price rose by more than 1,500 per cent. Since then, its shares have roughly halved in value.

The company has done nothing wrong. Nvidia produces capable graphics processing chips (GPUs) and makes good margins doing so. It has taken its technology and applied it creatively to AI, or machine-learning, problems. It was rather over-optimistic about how quickly it could work down its inventory of older chip designs and about prospective sales of its new ones. Nothing scandalous or shocking, though.

Along with being an actual company, however, Nvidia became the symbol of the AI age.

The crash in the stock is the investment world’s way of accepting what data scientists and engineers have been saying for months or even years: machine learning is useful but it does not replace the human mind. Or even a lot of animal or insect minds, for that matter.

Machine learning did get a lot better after 2012. Vision devices, natural language processing, speech recognition and the analysis of big data made leaps. In the past year or two, though, progress has slowed.

This is normal. Machine vision, for example, became much better in the mid-1980s, not least because there was a new generation of microprocessors that made the techniques more accessible for industrial applications such as circuit-board inspection. Then progress slowed to a walk. AI progress is like that: rapid development followed by years of more sober and incremental work.

The most recent boom in machine learning (and budgets for machine learning) was brought about by a confluence of theoretical, engineering and commercial advances.

The key theoretical advances were in “deep learning” in neural networks, which arguably started with a 1986 paper on “backpropagation” co-authored by Geoffrey Hinton, now of the University of Toronto. Algorithms based on backpropagation only became practical to implement at scale after 2012.

This is about organising layers of artificial neurons to perform a sort of self-diagnosis of errors in a series of inferences, allowing them to correct the errors by testing conclusions against a training set of inputs (such as images) and labelled outputs (names for types of images). The machines could learn over time to recognise a Russian tank, a cat or a hot dog.
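
For the curious, the mechanics can be sketched in a few dozen lines of Python. The toy network, the made-up labelled data and the learning rate below are illustrative assumptions, not anything taken from Hinton’s paper; the point is only that the weights are nudged, pass after pass, in whatever direction reduces the error on the training set.

    # A minimal sketch of training by backpropagation on a toy problem.
    # The data, layer sizes and learning rate are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy labelled "training set": inputs X and labels y (the XOR pattern).
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def add_bias(a):
        # Append a constant column so each layer has a bias term.
        return np.hstack([a, np.ones((a.shape[0], 1))])

    W1 = rng.normal(size=(3, 8))   # input (+bias) -> 8 hidden neurons
    W2 = rng.normal(size=(9, 1))   # hidden (+bias) -> 1 output neuron

    for step in range(20_000):
        # Forward pass: the network's current guesses for each example.
        h = sigmoid(add_bias(X) @ W1)
        out = sigmoid(add_bias(h) @ W2)

        # Backward pass: push the output error back through the layers.
        err_out = (out - y) * out * (1 - out)
        err_h = (err_out @ W2[:-1].T) * h * (1 - h)

        # Nudge the weights in the direction that reduces the error.
        W2 -= 0.5 * add_bias(h).T @ err_out
        W1 -= 0.5 * add_bias(X).T @ err_h

    print(np.round(out, 2))   # should drift towards the labels 0, 1, 1, 0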

Mind you, a lot of training data are necessary to make this process work. Pictures of hot dogs and “not hot dogs”, for example. Or the endless translations produced by the EU of documents in member countries’ native languages.

The key engineering advance was the wide availability of fast semiconductor graphics accelerator chips, such as those made by Nvidia. The company had been a niche producer of graphics cards for gamers before it became the giant of AI devices and bitcoin miners.

Nvidia’s millions of gamers had a visceral need for ever faster and more detailed renderings of things being blown up or sliced to pieces.

A lot of training data are needed for a computer to know the difference between hot dogs and not hot dogs © Wolfram Steinberg/AP

With the money that its customers probably should have spent on taking actual people on dates and starting families, Nvidia was able to rapidly improve the speed at which its GPUs processed problems such as destroying electronic monsters, providing proof of work for cryptocurrencies or analysing images.

Along with the elaborated backpropagation algorithms and fast, cheap GPUs, machine-learning practitioners also gained access to the much larger and better-annotated data sets generated by social media and search companies. Other corporations and government organisations with huge piles of data realised that machine learning was another way to unlock the commercial value of their information.

So the AI machine-learning bubble started to inflate. From the beginning, though, the actual data scientists tried to point out the limits of the technology.

One limit, for example, is in the interpretability of machine learning’s results. Machine learning is more of a black box than other sorts of computation. It is not easy to reverse the chains of inference to understand how the machine makes its decisions.

The fuzziness of the inference process makes it difficult to justify using a machine learning program’s output as a determinate result. You may have noticed this with the machine learning programs that banks use to detect credit card fraud. There are a lot of false positives.
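
The arithmetic behind those false positives is straightforward. The fraud rate and accuracy figures below are illustrative assumptions rather than any bank’s real numbers, but the conclusion is robust: when fraud is rare, even a fairly accurate model floods the system with false alarms.

    # Illustrative base-rate arithmetic for fraud alerts; every figure here
    # is an assumption chosen for the example, not a real bank statistic.
    transactions = 1_000_000
    fraud_rate = 0.001          # assume 0.1% of transactions are fraudulent
    sensitivity = 0.95          # assume the model catches 95% of real fraud
    false_positive_rate = 0.02  # assume 2% of good transactions get flagged

    fraud = transactions * fraud_rate
    legitimate = transactions - fraud

    true_alerts = fraud * sensitivity                  # genuine catches
    false_alerts = legitimate * false_positive_rate    # annoyed cardholders

    share_real = true_alerts / (true_alerts + false_alerts)
    print(f"alerts that are actually fraud: {share_real:.1%}")
    # Well under 5 per cent of the alerts are genuine; the rest are
    # false positives, which is roughly what cardholders experience.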

Unlike natural intelligence, machine learning has a hard time working with thin data sets. This is not a problem with, say, analysing chess or go strategies, because the program can run millions of games to create its own training set. It is harder when you have changing traffic and weather, not to mention how shadows will change over a day or season — and your machine learning program is trying to drive your car.
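
The contrast is easy to make concrete. A game-playing program can manufacture as much labelled data as it likes; the sketch below uses noughts and crosses as a stand-in for chess or go, with random self-play and a data volume chosen purely for illustration.

    # Sketch: a game program can generate its own training set by self-play.
    # Noughts and crosses stands in for chess or go; details are illustrative.
    import random

    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def random_game():
        # Play one game with random moves; record every position seen.
        board, positions, player = [None] * 9, [], "X"
        while winner(board) is None and None in board:
            move = random.choice([i for i, v in enumerate(board) if v is None])
            board[move] = player
            positions.append(tuple(board))
            player = "O" if player == "X" else "X"
        return positions, winner(board) or "draw"

    # A driving program cannot conjure up rare traffic situations this way,
    # but here hundreds of thousands of labelled positions cost only compute.
    training_set = []
    for _ in range(100_000):
        positions, result = random_game()
        training_set.extend((pos, result) for pos in positions)

    print(len(training_set), "labelled positions generated by self-play")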

That is the sort of problem where humans can make decisions (not always right) based on thin data. So self-driving cars, one of the biggest sources of gas for the AI bubble, are further off than we might have hoped.

And the financial world, always eager to buy a box that will own the world, does not bring enough data to the game. Price series may look data intense to the business graduate, but they are as nothing compared to video images. Also, more rapid-fire trading creates more data points, but most of the data added is noise rather than information.
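
Some rough, illustrative arithmetic makes the point; the figures below are assumptions, not measurements.

    # Rough comparison of data volumes; the numbers are illustrative assumptions.
    trading_days_per_year = 252
    years = 20
    price_points = trading_days_per_year * years          # daily closes

    frame_pixels = 1920 * 1080     # one HD video frame
    frames_per_second = 30
    one_minute_of_video = frame_pixels * frames_per_second * 60

    print(f"20 years of daily prices: {price_points:,} numbers")
    print(f"one minute of HD video:   {one_minute_of_video:,} pixel values")
    # Tick data arrives far faster than daily closes, but much of what it
    # adds is microstructure noise rather than new information.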

True artificial intelligence will require a lot more science. That probably means breakthroughs in understanding the processes of animal and human minds.
