Szókincs fejlesztés: AI Milestones So Far
Artificial Intelligence (AI) is the hot topic of the moment in technology, and the driving force behind most of the big technological breakthroughs of recent years.
In fact, with all of the hype we hear about it today, it’s easy to forget that AI isn’t anything all that new. Over the last century it has moved out of the domain of science fiction and into the real world, and the theory and the fundamental computer science that make it possible have been around for decades.
The Most Amazing Artificial Intelligence Milestones So Far
Since the dawn of computing in the early 20th century, scientists and engineers have understood that the eventual aim is to build machines capable of thinking and learning in the way that the human brain – the most sophisticated decision-making system in the known universe – does.
Today’s cutting-edge deep learning with artificial neural networks is the current state of the art, but many milestones along the road have made it possible. Here’s my rundown of those generally considered to be the most significant.
1637 – Descartes breaks down the difference
Long before robots were even a feature of science fiction, scientist and philosopher René Descartes pondered the possibility that machines would one day think and make decisions. While he erroneously decided that they would never be able to talk like humans, he did identify a division between machines that might one day learn to perform one specific task and those that might be able to adapt to any job. Today, these two fields are known as specialized and general AI. In many ways, he set the stage for the challenge of creating AI.
1956 – The Dartmouth Conference
With the emergence of ideas such as neural networks and machine learning, Dartmouth College professor John McCarthy coined the term “artificial intelligence” and organized an intensive summer workshop bringing together leading experts in the field.
During the brainstorming session, attempts were made to lay down a framework to allow academic exploration and development of “thinking” machines to begin. Many fields which are fundamental to today’s cutting-edge AI, including natural language processing, computer vision, and neural networks, were part of the agenda.
1966 – ELIZA gives computers a voice
ELIZA, developed at MIT by Joseph Weizenbaum, was perhaps the world’s first chatbot – and a direct ancestor of the likes of Alexa and Siri. ELIZA represented an early implementation of natural language processing, which aims to teach computers to communicate with us in human language, rather than to require us to program them in computer code, or interact through a user interface. ELIZA couldn’t talk like Alexa – she communicated through text – and she wasn’t capable of learning from her conversations with humans. Nevertheless, she paved the way for later efforts to break down the communication barrier between people and machines.
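ELIZA worked by matching the user’s words against a script of keyword patterns and echoing back a canned reply built from the match. Below is a minimal Python sketch of that rule-based idea; the patterns and responses are invented for illustration and are far simpler than Weizenbaum’s original script.

```python
import re

# A few invented ELIZA-style rules: a regex pattern plus a reply template.
# The real ELIZA used a much larger script of keywords and "reassembly" rules.
RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.IGNORECASE), "Is that the real reason?"),
]

def respond(user_input: str) -> str:
    """Reply by matching the input against the rules; the first match wins."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    # Generic deflection when nothing matches - ELIZA had these too.
    return "Please, go on."

print(respond("I feel tired today"))           # Why do you feel tired today?
print(respond("I am worried about my exam."))  # How long have you been worried about my exam?
```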
1988 – A statistical approach
IBM researchers published A Statistical Approach to Language Translation, introducing principles of probability into the until-then rule-driven field of machine translation. The paper tackled the challenge of automated translation between two human languages – French and English.
This marked a shift in emphasis towards designing programs that determine the probability of various outcomes based on the information (data) they are trained on, rather than programming them with hand-crafted rules. It is often considered a huge leap in terms of mimicking the cognitive processes of the human brain, and it forms the basis of machine learning as it is used today.
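To make the “probabilities from data, not rules” idea concrete, here is a toy sketch that estimates word-translation probabilities from co-occurrence counts in a tiny, made-up French–English corpus. It is only a caricature – the IBM models added word alignment and were trained on millions of sentence pairs – but the principle of learning likelihoods from data rather than hand-coding rules is the same.

```python
from collections import Counter, defaultdict

# A tiny invented "parallel corpus": pairs of French and English sentences.
corpus = [
    ("le chat dort", "the cat sleeps"),
    ("le chien dort", "the dog sleeps"),
    ("le chat mange", "the cat eats"),
]

# Count how often each French word co-occurs with each English word.
cooccur = defaultdict(Counter)
for fr_sentence, en_sentence in corpus:
    for fr_word in fr_sentence.split():
        for en_word in en_sentence.split():
            cooccur[fr_word][en_word] += 1

def translation_probability(fr_word: str, en_word: str) -> float:
    """P(English word | French word), estimated purely from the counts."""
    counts = cooccur[fr_word]
    total = sum(counts.values())
    return counts[en_word] / total if total else 0.0

# "cat" appears in every sentence paired with "chat"; "dog" never does.
print(translation_probability("chat", "cat"))  # 0.333...
print(translation_probability("chat", "dog"))  # 0.0
```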
1991 – The birth of the Internet
The importance of this one can’t be overstated. In 1991, CERN researcher Tim Berners-Lee put the world’s first website online and published the workings of the Hypertext Transfer Protocol (HTTP). Computers had been connecting to share data for decades, mainly at educational institutions and large businesses, but the arrival of the World Wide Web was the catalyst for society at large to plug itself into the online world. Within a few short years, millions of people from every part of the world would be connected, generating and sharing data – the fuel of AI – at a previously inconceivable rate.
1997 – Deep Blue defeats world chess champion Garry Kasparov
IBM’s chess supercomputer didn’t use techniques that would be considered true AI by today’s standards. Essentially it relied on “brute force” methods of calculating every possible option at high speed, rather than analyzing gameplay and learning about the game. However, it was important from a publicity point of view – drawing attention to the fact that computers were evolving very quickly and becoming increasingly competent at activities at which humans previously reigned unchallenged.
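The brute-force idea – exhaustively looking ahead through every possible continuation and assuming best play from both sides – can be sketched as a plain minimax search. The toy game below (players alternately take 1–3 stones; whoever takes the last stone wins) is chosen purely for brevity: Deep Blue’s real search added chess-specific evaluation, pruning and custom hardware, but the underlying lookahead principle is the same.

```python
# Toy game: players alternately remove 1-3 stones; taking the last stone wins.

def minimax(stones: int, maximizing: bool) -> int:
    """Search every continuation; +1 if the maximizing player can force a win."""
    if stones == 0:
        # The previous player took the last stone, so the player to move has lost.
        return -1 if maximizing else +1
    moves = range(1, min(3, stones) + 1)
    scores = [minimax(stones - take, not maximizing) for take in moves]
    return max(scores) if maximizing else min(scores)

def best_move(stones: int) -> int:
    """Pick the move with the best guaranteed outcome for the player to move."""
    return max(range(1, min(3, stones) + 1),
               key=lambda take: minimax(stones - take, maximizing=False))

print(best_move(10))  # 2 - leaving 8 stones, a losing position for the opponent
```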
2011 – IBM Watson’s Jeopardy! Victory
Cognitive computing engine Watson faced off against champion players of the TV game show Jeopardy!, defeating them and claiming a $1 million prize. This was significant because, while Deep Blue had proven more than a decade earlier that a game whose moves can be described mathematically, like chess, could be conquered through brute force, the idea of a computer beating humans at a language-based, creative-thinking game was unheard of.
2015 – Machines “see” better than humans
Researchers studying the annual ImageNet challenge – where algorithms compete to show their proficiency in recognizing and describing images drawn from 1,000 object categories – declared that machines were now outperforming humans.
Since the contest was launched in 2010, the accuracy rate of the winning algorithm had increased from 71.8% to 97.3% – prompting researchers to declare that computers could identify objects in visual data more accurately than humans.
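For context, ImageNet results like these are usually reported as top-5 accuracy: a prediction counts as correct if the true label appears among the model’s five most confident guesses. A minimal sketch of that calculation, with made-up labels and predictions:

```python
# Made-up data: the true label for each image and the model's five best guesses.
true_labels = ["cat", "dog", "tractor", "kettle"]
top5_predictions = [
    ["cat", "lynx", "fox", "dog", "tiger"],
    ["wolf", "dog", "coyote", "fox", "jackal"],
    ["harvester", "truck", "trailer", "plough", "bus"],  # true label missed
    ["teapot", "kettle", "jug", "pot", "mug"],
]

# A guess list is counted as correct if it contains the true label.
correct = sum(label in guesses
              for label, guesses in zip(true_labels, top5_predictions))
top5_accuracy = correct / len(true_labels)
print(f"Top-5 accuracy: {top5_accuracy:.1%}")  # Top-5 accuracy: 75.0%
```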
2018 – Self-driving cars hit the roads
The development of self-driving cars is a headline use case for today’s AI – the application which has captured the public imagination more than any other. Like the AI that powers them, self-driving cars aren’t something which has emerged overnight, despite how it may appear to someone who hasn’t been following technology trends. General Motors predicted the eventual arrival of driverless vehicles at the 1939 World’s Fair, and the Stanford Cart – originally built to explore how lunar vehicles might function, then repurposed as an autonomous road vehicle – made its debut in 1961.
But there can be no doubt that 2018 marked a significant milestone, with the launch of Google spin-off Waymo’s self-driving taxi service in Phoenix, Arizona. The first commercial autonomous vehicle hire service, Waymo One, is currently in use by around 400 members of the public, who pay to be driven to their schools and workplaces within a 100-square-mile area.
While human operators currently ride along in every vehicle to monitor performance and take the controls in an emergency, this undoubtedly marks a significant step towards a future in which self-driving cars will be a reality for all of us.
driving force – húzóerő
breakthrough – áttörés
hype – felhajtás
domain of science fiction – sci-fi felségterülete
fundamental – alapvető
dawn of computing – számítástechnika hajnala
decision-making system – döntéshozó rendszer
cutting-edge – úttörő
milestone – mérföldkő
to ponder – mérlegel
erroneously – tévesen
to coin the term – megalkotni a fogalmat
ancestor – felmenő
probability – valószínűség
to tackle the challenge – megbirkózni a kihívással
mimicking the cognitive processes – a kognitív folyamatokat utánozva
the importance of this one can’t be overstated – ennek fontosságát nem lehet eléggé hangsúlyozni
inconceivable – elképzelhetetlen
brute force – nyers erő
competent – jártas
accuracy – pontosság
overnight – egy este alatt
Birinyi Balázs