While artificial intelligence (AI) is most closely associated with dystopian cities and anonymous robot armies, you don't have to be a student of the sci-fi genre to understand its applications, nor to make good use of it. Setting aside the fields of electric dreams, the highly practical branch of AI known as 'machine learning' – in which a system learns from data and improves with experience, rather than following fixed, hand-written rules – can be found hidden in plain sight, fuelling everything from Uber's scheduling to your email's spam filter. In short, it makes everyday essentials run quickly, smoothly and intuitively.
The legendary cryptographer Alan Turing took centre stage in 1947, announcing at the London Mathematical Society that 'what we want is a machine that can learn from experience.' Three years later, in 1950, he expanded upon the idea when, building on Bayes' theorem, he proposed a 'learning machine' that could develop its own intelligence. Taking their cue from Turing's incomparable vision, scientists Marvin Minsky and Dean Edmonds built the very first neural network machine able to learn – the SNARC – just a year later, in 1951.
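For the curious, the theorem Turing drew on is a single line of probability, describing how to update belief in a hypothesis $H$ once evidence $E$ arrives – exactly what a machine that 'learns from experience' must do:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

In words: the updated belief is the prior $P(H)$, reweighted by how well the hypothesis explains the evidence. A modern spam filter applies the same logic when it weighs the words of an incoming email against everything it has seen before.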
The 1970s were a fallow time for machine learning, with pessimism surrounding its potential leading to those years being written off as the 'AI winter.' But the real boom decade was just around the corner, with the 1980s seeing a series of discoveries and developments pushing back the boundaries, and the rediscovery of backpropagation – the training algorithm that remains the backbone of artificial neural networks, sketched below – igniting scientific interest once again. The decade's achievements peaked in 1989, when machine learning found its first commercial home in Evolver, the first genetic-algorithm programme for personal computers, which could be used from within Excel to optimise business scheduling, finance and engineering problems. A backgammon programme created by Gerald Tesauro in 1992 played at the level of the world's best players, but its failure to capture the global imagination paved the way for the moment that really mattered – IBM's Deep Blue.
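Backpropagation itself is conceptually simple: run the network forward, measure the error, then apply the chain rule to push that error backwards through every layer and nudge the weights accordingly. The sketch below is a minimal, purely illustrative Python example – the tiny 2-4-1 network, the XOR task and every variable name are invented for this article, not drawn from any historical system:

```python
import numpy as np

# Toy dataset: the XOR function, a classic test that single-layer
# networks cannot solve but a two-layer network can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10_000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the chain rule, applied from output to input.
    # d_out is the gradient of the squared error through the sigmoid.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent step: nudge every weight against its gradient.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should be close to [[0], [1], [1], [0]]
```

Every modern deep-learning framework automates exactly this forward-backward-update loop, just at vastly greater scale.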
In 1997, machine learning made its most famous move yet when the world watched this boxy computer defeat world chess champion Garry Kasparov 3.5–2.5 in a six-game rematch, avenging its loss to him the previous year. Later the subject of the documentary The Man vs. The Machine, Deep Blue's victory marked a magical turning point in machine learning; the world now knew that mankind had created its own opponent.
Subsequent developments in machine learning continue to drive everything from commerce to credit, including Facebook's 'DeepFace' facial-recognition technology, the product recommendations on shopping sites, and even the thumbs-up – or down – for credit applications. And of course, machine learning has long been the best way to mine the treasures of big data, with its analytical capacity far outstripping that of more traditional methods.
Even in the field of cyber-crime, machine learning is starting to challenge its own opponents, pitting its pattern-spotting brain against hackers, ID theft and online fraud – while the emerging discipline of 'adversarial machine learning' studies how attackers try to fool those very defences. So as it works to protect, serve and improve our increasingly web-based world, it's fair to say that an intelligence beyond our own is finally here – and has only just begun.