Wednesday 2 November 2016

Artificial intelligence

This post is by way of being a report of a discourse at the Royal Institution in Albemarle Street, given on Friday 28th October.

Professor Christopher M. Bishop started life as a physicist but, overwhelmed by the data generated by big physics experiments, moved across to computing and is now described by Wikipedia as majoring on machine learning, neural networks, pattern recognition and natural language processing. He is also the author of ‘Pattern Recognition and Machine Learning’, a fat but rather good textbook from Springer which sits, somewhat read, on the bookshelf behind me. Good coverage of all the basics, basics on which I, for one, am rather rusty. I first came across him giving lectures on the internet – and I thought that they were very good; maybe distance learning will catch on.

He is now Laboratory Director at Microsoft Research Cambridge. Not very clear what that might be, but he wore a smart suit and gave a very polished talk – with more than a touch of Microsoft flavouring. More suit than beard these days.

We started with a quick canter through the computer programs which play world-class chess, Jeopardy! (a quiz show, see reference 1) and Go, and through the program from the DeepMind people which learned how to play a bunch of arcade games with little more to go on than screenshots. Plus a neat definition of artificial intelligence as the sort of intelligence that computers do not yet have.

Another interesting problem was the classification of images by computer.

And another was trying to work out what films you are likely to like, based on a few hints from yourself and a large database – or more precisely, a large but sparse matrix – of the likes and dislikes of other people. The bet being that your likes and dislikes are almost certain to resemble those of some of those other people closely enough, at some level or other, for the computer to get a grip on. This one was the subject of a rather neat video demonstration with a screen full of film posters jumping about, a demonstration which I think I have seen before, or at least read about before, but cannot now track down – although the people at references 2 and 3 may well be doing something of the same sort. And I now know that picking out movies for you, without your having to take much trouble about it, is big business; there are lots of sites out there offering to do it for you.
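
For the curious, the general idea can be sketched in a few lines of Python. The films and ratings below are invented for illustration, and this is plain nearest-neighbour collaborative filtering rather than whatever Microsoft or the people at references 2 and 3 actually run – but it gives the flavour: find people whose declared likes overlap with yours, then predict your missing ratings from theirs.

# A minimal sketch of the film-recommendation idea: given a sparse matrix of
# other people's ratings and a few ratings of your own, predict what else you
# might like by leaning on users whose tastes overlap with yours. The film
# titles and ratings are invented; this is generic nearest-neighbour
# collaborative filtering, not any particular site's method.
import numpy as np

films = ["Alien", "Amelie", "Brazil", "Casablanca", "Dune", "Fargo"]

# Rows are other people, columns are films; np.nan marks "has not rated".
ratings = np.array([
    [5.0, np.nan, 4.0, np.nan, 5.0, 2.0],
    [np.nan, 5.0, np.nan, 4.0, 1.0, np.nan],
    [4.0, 1.0, 5.0, np.nan, 4.0, np.nan],
    [1.0, 4.0, np.nan, 5.0, np.nan, 3.0],
])

# The few hints you have given: you liked Alien and Dune, disliked Amelie.
me = np.array([5.0, 1.0, np.nan, np.nan, 4.0, np.nan])

def similarity(a, b):
    """Cosine similarity over the films both parties have rated."""
    both = ~np.isnan(a) & ~np.isnan(b)
    if not both.any():
        return 0.0
    a, b = a[both], b[both]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

weights = np.array([similarity(me, row) for row in ratings])

# Predict each unrated film as a similarity-weighted average of what
# like-minded people gave it.
for j, film in enumerate(films):
    if np.isnan(me[j]):
        rated = ~np.isnan(ratings[:, j])
        if rated.any():
            pred = np.average(ratings[rated, j], weights=weights[rated] + 1e-9)
            print(f"{film}: predicted rating {pred:.1f}")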

A big teaching point for me was learning what the ‘deep’ in deep learning means. So we started with perceptrons, which were used to make the sort of neural network which has just one hidden layer. We saw pictures of one made in the 1950s with lots of wires and lots of very chunky looking components, about the size of jam jars, in racks. Such networks were very good at a certain class of problems. Then someone worked out how to make a network with two hidden layers work, how to train them, and these networks could tackle a rather larger class of problem. Deep learning is where you have lots of layers, and can tackle really serious problems. From where I now associate to the combination of bottom-up and top-down processing, much talked of by neurologists.
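
Again for the curious, a few lines of Python give the flavour of what the stacking amounts to. The weights below are random, purely to show the shape of the computation – one hidden layer against several – rather than anything trained; in real life they would be learned from data by backpropagation.

# A minimal sketch of what the "deep" in deep learning refers to: the same
# forward computation, first with one hidden layer (the early architecture)
# and then with several stacked layers. Weights are random, just to show the
# shape of the computation; in practice they are learned from training data.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layer_sizes):
    """Run x through a stack of fully connected layers of the given sizes."""
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        W = rng.standard_normal((n_in, n_out)) / np.sqrt(n_in)
        b = np.zeros(n_out)
        x = relu(x @ W + b)
    return x

x = rng.standard_normal(784)                        # e.g. a flattened 28x28 image

shallow = forward(x, [784, 100, 10])                # one hidden layer
deep = forward(x, [784, 300, 200, 100, 50, 10])     # several hidden layers

print("shallow output:", np.round(shallow, 2))
print("deep output:   ", np.round(deep, 2))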

He talked of one demonstration which was new to me: an executive from the US giving a talk to an audience of thousands in Beijing, a talk which was given in English and translated by a computer, in real time, into spoken Mandarin Chinese. A quick google does not turn up this particular talk, but it does turn up something which looks similar, a talk by Mark Zuckerberg of Facebook. It may all be to do with a chap called Yang Liu, Associate Professor in the Department of Computer Science and Technology at Tsinghua University.

Part of this was powered by a switch from tricky logic to dumb data. There are huge amounts of data about now, and huge amounts of computer power. One does not need to attack problems head-on any more, trying to encode their logic; just hit them with training on the data and statistics on the data. For the moment, it still helps to throw in a bit of subject matter expertise, like suggestions about good ways to structure a visual image – but even that may not be necessary for all that much longer.

We then moved into probability and statistics, into the war of the Bayesians with the Frequentists. This was enlivened by a neat but simple demonstration of the non-transitivity of loaded dice: die A would beat die B, B would beat C and, this being the bit that is not supposed to happen in well behaved systems, C would beat A. The point being that tricky things can happen in the world of probability. Google got this one in one – see reference 4.
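
The demonstration is easy enough to check at home. The dice below are a standard non-transitive set from the textbooks, not necessarily the ones used on the night, and a few lines of Python enumerating all thirty-six pairings of faces show that each die beats the next with probability 5/9.

# The loaded-dice demonstration, done by brute force: a standard non-transitive
# set where A tends to beat B, B tends to beat C, and yet C tends to beat A.
# Enumerating all 36 pairs of faces makes the point without any simulation.
from fractions import Fraction
from itertools import product

dice = {
    "A": [2, 2, 4, 4, 9, 9],
    "B": [1, 1, 6, 6, 8, 8],
    "C": [3, 3, 5, 5, 7, 7],
}

def p_beats(x, y):
    """Probability that a roll of die x exceeds a roll of die y."""
    wins = sum(1 for a, b in product(dice[x], dice[y]) if a > b)
    return Fraction(wins, 36)

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"P({x} beats {y}) = {p_beats(x, y)}")   # 5/9 in every case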

It was pointed out that one of the pluses of neural networks was that they could soak up a lot of damage. You could cut a lot of the connections or kill off a lot of the neurons, but they would still work after a fashion. Which is more than you could say of a computer program written along more traditional lines.
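
One can see something of this for oneself, assuming the scikit-learn library is to hand: train a small network on the toy digits dataset it ships with, cut a random fraction of the connections by zeroing weights, and watch the accuracy decay gradually rather than collapse. The exact figures will vary from run to run; the sketch below is mine, not anything shown on the night.

# A rough sketch of the "soaks up damage" point, assuming scikit-learn is
# available: train a small network, then zero out a growing fraction of its
# connections and see how the test accuracy degrades - gradually, rather than
# failing outright the way a conventional program with a cut wire might.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X_train, y_train)

rng = np.random.default_rng(0)
original = [W.copy() for W in net.coefs_]

for fraction in [0.0, 0.25, 0.5, 0.75]:
    # Cut (zero out) a random fraction of the connections in every layer.
    for W, W0 in zip(net.coefs_, original):
        W[...] = W0
        W[rng.random(W.shape) < fraction] = 0.0
    print(f"{int(fraction * 100):3d}% of connections cut: "
          f"accuracy {net.score(X_test, y_test):.2f}")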

A puff for a monster computer being built by Microsoft, powered by chips which can reconfigure themselves on the fly to suit the task at hand.

But I was sorry at the way the talk ended, with Bishop describing himself as an optimist who thought that computers and computing – the march of the robots – were a force for good, while seeming rather dismissive of the social problems that are likely to come too. It is not enough to say that surely it is a good thing that a clever computer program can speed up the work of a busy radiologist by a factor of ten. All a bit too glib – and, well, someone from somewhere like Microsoft would say something like that, wouldn’t they?

I also feel that we make quite enough of a mess of running the world the way things are now and I have little confidence that we will be able to manage the robots. Some of us may accumulate vast wealth, rather more may well get cheap ice cream and a long life – but what else are we going to get with them?

Putting considerations of impending doom aside, we finished the evening at the Halfway House at Earlsfield, sufficiently quiet by the end of this Friday evening for the cheerful young barmaid (from somewhere foreign, of course) to spend quality time with us. Maybe she mistook us for munificent geriatrics.

No aeroplanes on the platform.

Reference 1: https://www.jeopardy.com/.

Reference 2: https://www.tastekid.com/.

Reference 3: http://www.strong.io/blog/deep-neural-networks-go-to-the-movies.

Reference 4: https://en.wikipedia.org/wiki/Nontransitive_dice.
