This note is for those who like to play on-line games
Machines that can outwit the smartest brains
By Robert Blincoe
Published: June 26 2009 22:27 | Last updated: June 27 2009 01:35
ProBot is waiting for you. If you’re looking for a game of backgammon, he’s your man. Blackjack, too, and gin rummy, poker, dice or dominoes, all for stakes of £1,000, though you can play for less if that’s a little rich for you. He’ll take you on any time of the day or night: simultaneous sessions, multiple opponents, whatever you like. All the games he plays are a mix of skill and chance, so if you have those on your side, you can beat him. ProBot definitely isn’t invincible. Rather, he’s just good enough to keep you interested.
ProBot is a piece of software: in the language of his creator, Martin Smith, he’s an “executable” – he carries out tasks according to a coded set of rules. He’s male, because that’s how software experts who write programs like ProBot refer to their creations, not “she”, as they might with a car or a yacht. He also has little brothers. Smith has created a family, in which ProBot sits alongside RookieBot and AmateurBot, who are programmed to play well but make a statistically modelled number of stupid errors, mimicking the way humans play. Then there’s BeginnerBot. His job is to play against you for free, to help you increase your confidence and sharpen your skills.
These disembodied opponents are good teachers because they play well both strategically and tactically, unlike a good human player who, in backgammon for example, will tend to deviate from best play in order to punish a lesser opponent’s mistakes. If you go online to play backgammon, you can take on other human players who are also online or you can challenge one of Smith’s programs. And if you choose that latter option, you will be at the coalface of artificial intelligence in the commercial world.
These are contests that take place on constantly shifting terrain. Smith releases his latest version of a program: it wins for a couple of weeks, then the humans figure out how to beat it and they win for a couple of weeks while Smith goes away and works on his software – algorithms, probability calculations, search techniques – and scratches his head. Then he comes back with a new version of the program that wins for a couple of weeks, while the humans go away and think about it. “We have this ratcheting up,” Smith says. “It’s a very intellectually rewarding thing.”
At the moment, a bug in ProBot means it has been doing badly at Hypergammon, a simplified version of backgammon, and the smart humans have been beating it for £500 stakes. Smith has figured out his fix but he’s waiting to spring it on them at a busy time, to earn some money back. “It should catch them by surprise – then it will be on with the fun and games,” he grins, relishing the constantly evolving challenge of the artificial intelligence arms race.
“My strategies have to work against every player with every conceivable strategy. All they have to do is fine-tune their strategy to just beat me.”
. . .
May 11, 1997: Garry Kasparov loses to Deep Blue
World chess champion Garry Kasparov peers out from beneath a furrowed brow. He is 19 moves into game six against Deep Blue, IBM’s supercomputer, and things aren’t looking good, writes Anna Chambers. He knows he has made a poor move. A minute shake of the head betrays his disbelief. He sits for a moment, despondent. But within seconds he is up and out of his chair, striding away from the table, arms outstretched. It’s over.
The computer has won.
Kasparov’s nemesis, Deep Blue, weighed in at 2,800lb, a featureless black box concealing more than 500 processors primed to examine 200 million possible moves per second. Apart from the “brute force” technique, Deep Blue learnt strategy by sparring with grandmasters.
The distress of the Azerbaijan-born champion only highlighted Deep Blue’s inability to feel. But the victory divided opinion on the nature of intelligence, the future of artificial intelligence – and chess as a measure of the mind.
IBM was quick to point out that its prodigy was “less ‘intelligent’ than even the stupidest human”, but the world was unsettled by Deep Blue’s fathoms of silent thought. As Kasparov conceded: “When I see something that is well beyond my understanding, I’m afraid.”
The limits of artificial intelligence become most obvious in retrospect. However impressive the advances in computing seem at the time, the ultimate goal of artificial intelligence – a machine that can think like a human, HAL the computer from the film 2001 – is always just over the horizon. Smith’s own favourite quote about the field is that “AI is whatever computers can’t do yet”. “People tend to move the goalposts,” he says. “Fifty years ago it was chess – ‘We’ll know we’ve got intelligent programs when we get a program that can beat a grandmaster at chess.’ Now we have and everyone says, ‘That’s not AI, it’s something else.’”
ProBot certainly doesn’t think like a human. Nor did IBM’s Deep Blue when it made history by beating the world chess champion Garry Kasparov in 1997. Like Deep Blue, ProBot exploits the great strength of computers: carrying out large numbers of individually simple calculations very quickly and very accurately. Humans, by contrast, use sophisticated evaluations of a small number of possible move sequences. The more Smith develops his ideas, the more he realises how far we are from creating sentient, thinking machines. “None of this stuff is anywhere near that,” he says.
“There’s a very big difference between computer intelligence and human intelligence. This is clearly indicated in games. Back in the ’70s, it was considered that in order to get computers to play chess like a grandmaster, they had to think like grandmasters. That turned out to be wrong. Computers can play fantastic chess and they don’t do anything remotely like what human grandmasters do.” He believes that Kasparov’s defeat in 1997 happened because the champion failed to prepare properly before taking on Deep Blue, made a catastrophic beginner’s mistake and so did a disservice both to himself and to the field of artificial intelligence.
Smith is not alone in pushing the development of computer science in order to take on humans at their own games. Complex board and card games have long been the testing ground for artificial intelligence. The idea of creating a mechanical chess player was one of the inspirations behind Charles Babbage’s Difference Engine in the 19th century. It was the beginning of computing. In 1948, Alan Turing, the father of modern computer science, developed the basic algorithms still used in chess programs. And it is this long conversation between the rule-based worlds of games and computing that has placed the relatively young business of online gaming at the forefront of developments in artificial intelligence.
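The basic algorithm Turing (and, independently, Claude Shannon) described is minimax search: assume your opponent will always answer with the move that is worst for you, and pick the line that is best against that best reply. A minimal sketch, using a toy game tree of our own invention rather than anything from the original papers:

```python
# A minimal sketch of minimax, the search idea at the heart of those
# chess algorithms. A "position" here is either a dict mapping moves to
# successor positions, or a number (a static score for the root player);
# the representation is our own toy, not Turing's or Shannon's notation.

def minimax(position, maximizing):
    """Best score the root player can force from this position."""
    if not isinstance(position, dict):       # leaf: static evaluation
        return position
    scores = [minimax(child, not maximizing) for child in position.values()]
    return max(scores) if maximizing else min(scores)

# A two-ply toy tree: our move, then the opponent's best reply.
tree = {
    "a": {"a1": 3, "a2": 5},   # opponent answers with the minimum: 3
    "b": {"b1": 2, "b2": 9},   # opponent answers with the minimum: 2
}
print(minimax(tree, maximizing=True))   # -> 3: line "a" is the safest
```

Note that line "b" contains the biggest prize (9), but minimax rejects it because a rational opponent steers play to the 2 instead.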
But this being the commercial world, compromises have to be made. Deep Blue’s chess match against Kasparov was effectively an experiment in which as much processing power as possible was thrown at the problem in order to establish whether a really fast machine could defeat a prodigiously gifted human. In his battles with punters, Smith cannot call on the unlimited processing power of a supercomputer that has plenty of time to make its decisions. His programs might be playing 100 different games at the same time, and making their moves in about a quarter of a second.
The trick Smith must pull off is to find the best balance between his processing power – how far his program can look ahead in a game – and the rule-based techniques derived from his expert understanding of game strategy. Imposing too many rules restricts how far ahead in the game ProBot can look in the time available. Different games require different methods. If these parameters changed, Smith would let the humans take on ProBot for stakes above £1,000. But as it is, some people make money against ProBot. Some make a living. Smith’s job is to make sure they don’t win large amounts consistently.
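That balance – search as deep as a fixed time budget allows, and fall back on rule-based evaluation at the horizon – is commonly handled with a technique called iterative deepening. The sketch below is a hypothetical illustration of the idea, not Smith's code; the function names and the toy game are invented:

```python
import time

# A sketch of the trade-off described above: search only as deep as a
# fixed time budget allows (iterative deepening), falling back on a
# cheap rule-based evaluation at the horizon. The function names and
# the toy game below are hypothetical, not Smith's actual code.

def choose_move(position, legal_moves, apply_move, evaluate, budget=0.25):
    """Pick a move for the maximizing player within `budget` seconds."""
    deadline = time.monotonic() + budget
    best = legal_moves(position)[0]           # fallback: any legal move

    class OutOfTime(Exception):
        pass

    def search(pos, depth, maximizing):
        if time.monotonic() > deadline:
            raise OutOfTime                   # abandon this iteration
        moves = legal_moves(pos)
        if depth == 0 or not moves:
            return evaluate(pos)              # rule-based static evaluation
        values = [search(apply_move(pos, m), depth - 1, not maximizing)
                  for m in moves]
        return max(values) if maximizing else min(values)

    depth = 1
    try:
        while True:                           # deepen until time runs out
            scored = [(search(apply_move(position, m), depth - 1, False), m)
                      for m in legal_moves(position)]
            best = max(scored)[1]             # keep the deepest finished answer
            depth += 1
    except OutOfTime:
        return best

# Toy game: a position is a running total; "big" adds 3, "small" adds 1.
moves = lambda pos: ["big", "small"]
step = lambda pos, m: pos + (3 if m == "big" else 1)
print(choose_move(0, moves, step, lambda p: p, budget=0.05))  # -> "big"
```

The design point matches the paragraph above: adding more rules makes `evaluate` slower, which in turn shrinks the depth the quarter-second budget can reach, so the two have to be tuned against each other per game.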
. . .
Every morning, straight out of bed and after bathing his son, Smith checks that no one has taken his artificial players to the cleaners overnight. Even when he’s on holiday, the routine remains the same. He is highly competitive and loves the challenge. He has the air of someone with a misspent youth behind him, and despite his PhD in artificial intelligence, he looks as though he would be more comfortable in the pub or at the football than on a university campus.
Smith was born in north London in 1967. He won the London junior chess championship at 15, adding his name to the who’s who of British chess. He liked playing at the Black Rook, a gaming dive underneath the White Horse pub in Hampstead, north London. “I was supposed to be studying for A-levels but I was hustling chess down there for 50p a game. It’s where I discovered poker.” Older guys in their 20s and 30s were thinking, ‘We’ll have his pocket money.’ They were in for a surprise. Smith read every book he could find and dealt himself hand after hand.
After messing up his A-levels, he headed to Los Angeles, where he ended up staying for three and a half years, playing poker and chess for a living and outstaying his tourist visa. He made money, bought a flash hi-fi and dreamed of a bigger apartment and a car. He lost money and sold the hi-fi for half what he’d paid. “I quickly learnt to plan,” he recalls.
It had started out as glamorous and exciting but ended up a grind. He decided to return to the UK after US immigration sent him a polite letter offering him 48 hours to leave the country. “I got fed up with the whole thing. I was never going to break the big time.”
Arriving home aged 22 (he’d spent his 21st birthday in a poker room without telling anyone), he played more poker and coached chess. The father of one of his pupils was the dean of Middlesex University, who encouraged him to apply for a place on its new degree course in artificial intelligence. Smith was already tinkering with games and software by that point, and the course suited him. He didn’t leave the academic world until he was 33, having happily studied and gambled the time away.
Now 42, Smith is in charge of technology for GameAccount Global, a job he was head-hunted for in his late 30s. GameAccount provides online games for the household names of sports betting – Gala Coral, Rank, Stan James, BoyleSports, Sportingbet Europe, William Hill, Betfred, PaddyPower. Its business works by bringing together customers from all the big online bookies with whom it works in a single virtual games room. A hundred websites feed this pool of shared customers, which means plenty of liquidity and enables all the companies to promise their customers enough opponents to play at their price or skill level. Some £102m was staked in the three months to May 2009. As an example of margins, GameAccount’s chief operating officer, Dermot Smurfit, says that in February, £30m was won by players in the network.
In Smith’s online games arena, about 1,000 players at a time can congregate, looking for action. Humans challenge humans. Humans challenge bots. Bots tempt humans. Initially, Smith’s bots, which are all clearly labelled as such, were just there to help build up enough customers and get the money flowing. But it turned out that people liked playing against them, and so the virtual players became integral to the commercial proposition.
Although Smith’s software has turned out to be part of the reason punters like to come and play, this wasn’t the reason GameAccount hired him. He was originally brought in to work out whether rogue players were using backgammon software to beat other customers. They were – it was easy.
Rise of the machines
1840s: Charles Babbage, originator of the concept of a programmable computer, sketches out plans for a chess-playing machine.
1948-50: British mathematician Alan Turing and US electronic engineer Claude Shannon independently develop the basic algorithms still used in chess programs.
1980: The Othello program The Moor, written by Mike Reeve and David Levy, wins one game in a six-game match against world champion Hiroshi Inoue.
1992: TD-Gammon, developed by IBM’s Gerald Tesauro, plays backgammon at a level nearly equal to that of the best human players. It also influences the play of its living counterparts.
1994: Former world champion draughts (checkers) player Marion Tinsley resigns, for health reasons, in an even match against Chinook, a program written by a team from the University of Alberta. Chinook beats grandmaster Don Lafferty the following year in a 32-game match. The final score is 1-0 with 31 draws.
1997: IBM’s Deep Blue beats world chess champion Garry Kasparov. The machine wins a six-game match two-one with three draws.
1997: Logistello, written by Michael Buro, defeats the world Othello champion Takeshi Murakami, 6-0.
1998: Scrabble program Maven, developed by Brian Sheppard, beats grandmaster Adam Logan nine games to five. The average score is 417.3 to 388.6.
Backgammon is perhaps the game that has changed most because of artificial intelligence. Through a neural learning technique developed in 1991, a bot can learn the game by playing hundreds of thousands of games against itself, and so master optimal strategy. “That genuinely was a breakthrough in terms of AI as well as games,” says Smith. His own doctoral thesis was based on this machine learning. The technique doesn’t work for all games, but the development means there are now two well-known commercial backgammon programs – Snowie 4 and the free, open-source GNU Backgammon – which can match or beat the best human players (depending on whether you ask a backgammon pro or an AI brain).
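The self-play technique described above is generally identified as temporal-difference (TD) learning, the method behind Tesauro's TD-Gammon. The real system used a neural network and TD(λ); the toy sketch below keeps only the core trick, using a simple random-walk game of our own invention: the bot plays itself, and after every move nudges its estimate of the old position's value toward its estimate of the new one.

```python
import random

# A toy version of the self-play learning behind programs like
# TD-Gammon: tabular TD(0) on a five-square random walk. The real
# system used a neural network and TD(lambda); this sketch only shows
# the core trick - the bot plays itself and nudges each position's
# value estimate toward the value of the position that followed it.

random.seed(0)
N_STATES = 7                 # states 0..6; 0 is a loss, 6 is a win
V = [0.5] * N_STATES         # value estimates: "don't know yet"
V[0], V[6] = 0.0, 1.0        # terminal values are known

alpha = 0.05                 # learning rate
for episode in range(50_000):
    s = 3                    # every game starts in the middle
    while 0 < s < 6:
        s2 = s + random.choice([-1, 1])   # the bot "plays itself"
        V[s] += alpha * (V[s2] - V[s])    # TD(0): nudge toward successor
        s = s2

# True win probabilities from states 1..5 are 1/6, 2/6, ..., 5/6;
# the learned estimates converge close to them.
print([round(v, 2) for v in V[1:6]])
```

No one tells the program the odds; it discovers them purely from the outcomes of its own games, which is what made the approach such a striking fit for backgammon.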
It was obvious that people would try a little online hustling, sitting at their PCs with champion software running alongside to make their moves. One gaming site discovered that its top 14 backgammon players were software programs.
If you want to cheat against GameAccount’s bots, Smith says, “bring it on. I rate my bots as being pretty decent at these games”. Cheating against other punters, however, is another matter. Smith’s team has developed software that detects rogue players who are using backgammon software. “We have an automated system which plays over the games and looks for similarities in the play to the way computers play. It’s almost like fingerprinting. Over enough games the only way they could play the moves they have done is with certain bits of software. I’ve got copies of all the programs the cheats might be using. We can even say to someone, ‘You’ve been using GNU Backgammon, and it’s this version, set to these settings.’” It is not illegal to use this software, but it is in breach of the gaming sites’ terms and conditions. Smith will confiscate winnings before showing an offender the door.
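The fingerprinting Smith describes can be sketched as a match-rate test: over many recorded positions, count how often a player's chosen move coincides with each known engine's preferred move. The engines, thresholds and toy positions below are hypothetical stand-ins, not GameAccount's actual detection system:

```python
# Sketch of the "fingerprinting" idea described above: over many
# recorded positions, measure how often a player's chosen moves match
# each known bot's preferred moves. Engines, threshold and positions
# here are hypothetical, not the real detection system.

def match_rate(player_moves, engine_choice, positions):
    """Fraction of positions where the player played the engine's move."""
    hits = sum(engine_choice(pos) == move
               for pos, move in zip(positions, player_moves))
    return hits / len(positions)

def likely_engine(player_moves, engines, positions, threshold=0.9):
    """Name the engine the player most resembles, if above threshold."""
    rates = {name: match_rate(player_moves, choose, positions)
             for name, choose in engines.items()}
    name, rate = max(rates.items(), key=lambda kv: kv[1])
    return name if rate >= threshold else None

# Toy demo: positions are numbers; "engines" pick moves by simple rules.
engines = {
    "EngineA": lambda pos: pos % 2,        # hypothetical stand-ins for
    "EngineB": lambda pos: (pos + 1) % 2,  # real backgammon programs
}
positions = list(range(100))
suspect = [p % 2 for p in positions]       # plays exactly like EngineA
print(likely_engine(suspect, engines, positions))  # -> "EngineA"
```

The reason this works in practice is the one Smith gives: over enough games, a human's moves diverge from any one engine's, while a player relaying an engine's output matches it almost perfectly, right down to the version and settings.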
. . .
A guy called Smellysocks has beaten ProBot at Naval Wars, the site’s version of Battleships. Smellysocks was so good that Smith thought he was cheating, or even a computer himself. Watching him play on a Friday evening, Smith sees him taking an inexperienced punter for some quick £5 wins – probably someone who has logged on after an evening in the pub. The destroyer, frigate, sub, tug and aircraft carrier are all ablaze after some spookily accurate firing.
Smith needs to work on his code, but the problem is that Battleships isn’t a popular enough game to warrant the time he’ll need to spend figuring out his options and then writing the software in a high-speed, late-evening coding session at home. That’s a problem for Smellysocks, too. He’s the only player offering games of Naval Wars, and with his software opponent having retired hurt, there are few takers. He’s a victim of market forces – a winner at a game that no one wants to play.
For once, not even Smith’s band of software players can provide enough opposition to make the market for Naval Wars work. In plenty of other cases, however, there are niche markets, and for many of the players who pitch up, it is Smith who gives them a reason to stay. Experts like him may be a long way from creating machines that can think like humans, but for these players, that’s not a problem. The reason they like to take on his bots is precisely because they do not think and behave as if they were human.
“When you’re beating them they’re not bitching or moaning,” says Smith. “When they’re beating you, they’re not boasting. They’ll never stop playing when they’re in front or stop paying when they’re behind. They’ll play fast. They won’t say, ‘I’ve got to go to bed now.’ They won’t be saying, ‘I’m just going to make a cup of tea,’ and not come back. There’s none of those annoying human frailties.”
Robert Blincoe is a freelance journalist specialising in technology and gambling
Go: the game we still win – just
If you want to beat the smartest software in the world at a classic board game, the only choice you have left is Go. The strategy game, which originated in China 4,000 years ago, is regarded as the last bastion of human gaming superiority, a rare point of resistance to the onslaught of raw processing power combined with artificial intelligence techniques.
By 2000, chess, draughts, Othello, backgammon and Scrabble had all been conquered by machines. But Go, played on a 19 x 19 grid, is still just too complex.
Go is played by two players who alternately place black and white stones (playing pieces, now usually made of glass or plastic) on the vacant intersections of a grid of 19 x 19 lines. The object is to use one’s stones to surround more empty intersections (called “territory”) than the opponent. A stone or a group of stones is captured and removed if it has no empty adjacent intersections, the result of being completely surrounded by stones of the opposing colour. Games typically comprise a couple of hundred moves.
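The capture rule, at least, is easy to mechanise: a flood fill finds a stone's group and counts its empty neighbours (its "liberties"); a group with none is captured. The board representation below (a dict from intersection to colour) is our own sketch, not any particular program's:

```python
# A minimal sketch of the capture rule described above: a group of
# stones is captured when it has no liberties (no empty adjacent
# intersections). The board representation - a dict from intersection
# to colour - is our own invention, not any particular program's.

def group_and_liberties(board, start, size=19):
    """Flood-fill the group containing `start`; return (stones, liberties)."""
    colour = board[start]
    group, liberties, frontier = set(), set(), [start]
    while frontier:
        x, y = frontier.pop()
        if (x, y) in group:
            continue
        group.add((x, y))
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= nx < size and 0 <= ny < size:
                if (nx, ny) not in board:
                    liberties.add((nx, ny))      # empty point: a liberty
                elif board[(nx, ny)] == colour:
                    frontier.append((nx, ny))    # same colour: same group
    return group, liberties

# A white stone in the corner, blocked on both of its neighbours:
board = {(0, 0): "white", (1, 0): "black", (0, 1): "black"}
group, libs = group_and_liberties(board, (0, 0))
print(len(libs))   # -> 0: no liberties, so the white stone is captured
```

What is hard about Go is not the rules, which fit in a few lines like these, but judging which of the hundreds of legal moves is best.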
At each turn, especially at the start of the game, there are hundreds of possible moves (the board has 361 points, compared with chess’s 64 squares), and deciding which will turn out better further down the line is extremely hard: there are a huge number of almost-equal possibilities and no cheap way to compare them. The methods that work for chess – picking the move that gives your opponent the fewest high-value moves in future – don’t work in Go. Professional players often act intuitively, and precisely encoding their knowledge into machine-understandable rules has so far proved a dead end.
However, the endgame, with computers finally victorious, is in sight. Development around a new algorithm, combined with increasing processing power, means machines will probably be able to beat any human in 20 years’ time, and possibly as soon as five.
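The article leaves the algorithm unnamed, but the computer-Go breakthrough of this period is widely identified as Monte Carlo tree search. Its central trick sidesteps the evaluation problem entirely: judge a candidate move by the average result of many random games played out from it. A minimal sketch of that idea, on a hypothetical counting game rather than Go itself:

```python
import random

# The core idea behind Monte Carlo evaluation in games: judge a
# candidate move by the average result of many random playouts from
# the position it produces. The toy game below (reach exactly 10 by
# adding 1, 2 or 3) is a hypothetical stand-in, not Go itself.

random.seed(1)

def playout(position, legal_moves, apply_move, result):
    """Play uniformly random moves to the end; return the result (0/1)."""
    while legal_moves(position):
        position = apply_move(position, random.choice(legal_moves(position)))
    return result(position)

def monte_carlo_move(position, legal_moves, apply_move, result, n=500):
    """Pick the move whose random playouts succeed most often."""
    def score(move):
        after = apply_move(position, move)
        return sum(playout(after, legal_moves, apply_move, result)
                   for _ in range(n)) / n
    return max(legal_moves(position), key=score)

legal = lambda total: [] if total >= 10 else [1, 2, 3]
add = lambda total, m: total + m
outcome = lambda total: 1 if total == 10 else 0   # landing exactly on 10 wins
print(monte_carlo_move(7, legal, add, outcome))   # -> 3, which lands on 10
```

No hand-written knowledge of the game is needed, only the rules and a result, which is exactly why the approach succeeded where encoding professional intuition failed.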
Copyright The Financial Times Limited 2009